The financial desktop is going through its most fundamental change in 20 years, driven by a confluence of forces. End users have become overwhelmed by the sheer volume of data and applications they have to work with. Large data providers have consolidated and their products have become monolithic, while a new breed of independent workflow and data apps offers choice and flexibility within specific niches. Cloud providers are creating data marketplaces, facilitating data discovery and delivery. Meanwhile, desktop technology is evolving to help end users cope and make better use of what is available to them.

But there is much left to be addressed:

  1. Interoperability between desktop applications needs to improve – even with basic “context sharing” there is still too much jumping from one application to another;
  2. The underlying data varies widely in how accessible it is, and integrating it is either expensive or requires heavy manual intervention;
  3. User-generated solutions are prone to errors, creating risks that need to be mitigated through compliance controls; and
  4. New sources and delivery mechanisms (e.g. cloud marketplaces) and new tools and capabilities (no-code, Python) are introducing new elements into the mix.

End Users: Information Overload


With the increasing size and complexity of financial markets has come a proliferation of the information that end users need to assemble and assimilate to get through their daily tasks.

End-user technology for financial data falls into the following broad categories:

  • “Terminals”: desktop applications that bundle a wide variety of financial data, typically including an Excel add-in and API access. Bloomberg, Refinitiv, S&P Capital IQ and FactSet are the major players.
  • Third-party apps for specialized workflow and analytics, such as order and execution management systems (OMS/EMS), chat (messaging) and analytics tools. These may also include Excel add-ins and API access.
  • Internal apps developed by in-house technology teams, which may include proprietary and third-party data. The underlying data may be available through the app or through access to the data lake on which the application is built.
  • Other internal data may be available on shared drives or in the cloud in the form of a data lake.

Many of these apps are essentially data viewers – tools for displaying and interacting with data through predefined views and standardized calculations. But users increasingly want to liberate the underlying data, combine data from multiple sources, run regressions and conduct custom calculations. This requires a lot of creativity, cutting and pasting, exporting and ultimately importing into tools like Excel, Tableau and Power BI.
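To make that “last mile” concrete, here is a minimal sketch of the kind of glue work a user ends up writing once the data has been exported. The file names, column layouts and ticker are invented for illustration; the pattern is simply to load two extracts, join them, and run a custom calculation that none of the source apps offers.

```python
# Illustrative sketch only: file names, columns and the ticker are hypothetical,
# standing in for data exported from a terminal and an internal app.
import numpy as np
import pandas as pd

prices = pd.read_csv("terminal_prices.csv", parse_dates=["date"])      # ticker, date, close
benchmark = pd.read_csv("benchmark_export.csv", parse_dates=["date"])  # date, index_level

# Combine the two extracts on date and compute daily returns.
merged = prices[prices["ticker"] == "ABC"].merge(benchmark, on="date")
merged["stock_ret"] = merged["close"].pct_change()
merged["index_ret"] = merged["index_level"].pct_change()
merged = merged.dropna()

# A simple least-squares regression of stock returns on index returns (beta).
beta, alpha = np.polyfit(merged["index_ret"], merged["stock_ret"], 1)
print(f"beta={beta:.2f}, alpha={alpha:.4f}")
```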

As a result, the average front-office user interacts with around 40 applications, and much of their workflow is custom built. This has implications for the firms where these users work – not just for productivity but also for risk. A 2008 study carried out at the University of Hawaii found that almost 90% of spreadsheets contained errors – and when money is concerned, errors can quickly translate into huge losses.

Data Providers: Sources are Diversifying Despite Consolidation at the Top

A key trend in the industry has been consolidation at the high end, with the four largest players accounting for roughly 85% of the spend on financial applications. Beyond these big players, there are plenty of more specialized providers of data in areas such as insider trading, short interest, corporate events, dividend forecasts and sentiment on news and Twitter. Delivery of this data takes a variety of forms – some firms have a front-end application, most offer bulk delivery via secure FTP and a few have well-designed APIs. An increasing number are marketing their data through specialized data marketplaces such as AWS Data Exchange and Snowflake Data Marketplace.

These cloud data marketplaces help data providers find new clients and manage data distribution. That said, the delivery mechanisms are basic and typically bulk in format. The lack of a single consistent data model means the data still needs a lot of work to integrate – work that has to be done by internal technology teams or expert end users.
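As a rough illustration of that integration work, the sketch below maps two invented vendor dividend feeds, each with its own field names, date formats and units, onto a single internal model. Neither schema corresponds to a real provider; the point is the normalisation code someone has to own.

```python
# Hypothetical example: two bulk files describe the same dividends, but with
# different field names, date conventions and units. Neither schema is real.
import pandas as pd

vendor_a = pd.DataFrame({
    "Ticker": ["ABC", "XYZ"],
    "ExDate": ["2024-03-01", "2024-03-15"],
    "DivAmt": [0.55, 1.20],          # reported in dollars
})
vendor_b = pd.DataFrame({
    "ric": ["ABC.N", "XYZ.N"],
    "ex_dividend_date": ["01/03/2024", "15/03/2024"],
    "amount_cents": [55, 120],       # reported in cents
})

# Map both feeds onto one internal model: ticker, ex_date, amount (dollars).
norm_a = vendor_a.rename(columns={"Ticker": "ticker", "ExDate": "ex_date", "DivAmt": "amount"})
norm_a["ex_date"] = pd.to_datetime(norm_a["ex_date"])

norm_b = pd.DataFrame({
    "ticker": vendor_b["ric"].str.replace(".N", "", regex=False),
    "ex_date": pd.to_datetime(vendor_b["ex_dividend_date"], dayfirst=True),
    "amount": vendor_b["amount_cents"] / 100.0,
})

# Combine and de-duplicate; in practice conflicting values would also need reconciling.
combined = pd.concat([norm_a, norm_b], ignore_index=True)
combined = combined.drop_duplicates(subset=["ticker", "ex_date"])
print(combined)
```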


Application Interoperability: Progress but more to do

Applications are gradually becoming more interoperable with the adoption of FDC3. The Financial Desktop Connectivity and Collaboration Consortium (FDC3) is an agreed set of industry standards and protocols that determine how financial apps may interoperate. Adoption of FDC3 is paving the way for applications built on a wide variety of technologies (C++, .NET, Java, Python) to share context and pass parameters between one another. A new breed of desktop framework providers (OpenFin, Cosaic and Glue42) is leveraging these standards to enable more seamless integration of apps that adopt them. What is encouraging here is that some large and many small providers are on board, as well as a growing list of internal buy-side and sell-side technology teams at large financial institutions.

FDC3 represents a significant step forward, but further evolution is required. For example, context passing allows one application (say a trade blotter) to share a ticker with a chart app, while news for the same stock may be shown in another window. This is very much a broadcast model – whereas the most common user need is to consolidate and aggregate data from multiple sources into one app: for example, to show a stock price, dividends and trades all on one chart, with the data delivered from three applications. The recent release of OpenFin's Workspace with a centralized Notification Center is a step in this direction.
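To illustrate the gap, the toy Python sketch below (this is not the FDC3 API, just a stand-in publish/subscribe model with invented names) contrasts the broadcast pattern that exists today with the aggregation pattern users actually want.

```python
# Toy sketch (not the FDC3 API): contrasting a broadcast of context with
# aggregation of data from several apps into a single consolidated view.
from collections import defaultdict

class ContextBus:
    """Minimal publish/subscribe channel, standing in for a desktop container."""
    def __init__(self):
        self.listeners = defaultdict(list)

    def add_listener(self, context_type, handler):
        self.listeners[context_type].append(handler)

    def broadcast(self, context):
        for handler in self.listeners[context["type"]]:
            handler(context)

bus = ContextBus()

# Broadcast model: each listening app reacts independently to the same ticker.
bus.add_listener("instrument", lambda ctx: print(f"Chart app: plotting {ctx['ticker']}"))
bus.add_listener("instrument", lambda ctx: print(f"News app: headlines for {ctx['ticker']}"))
bus.broadcast({"type": "instrument", "ticker": "ABC"})

# Aggregation model: one app pulls price, dividend and trade data from three
# sources and consolidates them into a single view, which broadcast-style
# context sharing does not yet address directly.
sources = {
    "terminal": lambda ticker: {"price": 101.2},
    "dividend_app": lambda ticker: {"next_dividend": 0.55},
    "oms": lambda ticker: {"open_orders": 2},
}
consolidated = {}
for name, fetch in sources.items():
    consolidated.update(fetch("ABC"))
print("Consolidated view:", consolidated)
```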


Data Interoperability: Still a challenge

Here the challenges remain more acute. Attempts at creating an open data model for financial services have been slower in coming – the closest and perhaps most ambitious is the EDM Council's FIBO (Financial Industry Business Ontology) initiative, which, absent the full cooperation of the large vendors, has struggled to gain traction. The larger vendors operate on proprietary data models and schemas that will be hard to normalize and build a standard around.


New Tools and Old Tools with New Tricks

As mentioned, end users have had to be creative when it comes to accessing and combining data – overcoming the skill gap, as well as significant inconsistencies in data delivery mechanisms, data schemas and access controls. 

Here, Python has emerged as a popular data science and general-purpose data manipulation language – one that some have claimed is the “new Excel” for traders. It certainly has many advantages over Excel, especially when it comes to processing large amounts of data – hence its popularity with quants and data scientists. Python in combination with Jupyter notebooks also helps end users share and collaborate, and getting going is easy with a growing ecosystem of libraries, such as NumPy and pandas, that are widely used in finance.
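For a sense of why, here is a minimal example, on randomly generated data, of a calculation that takes a few lines in pandas but becomes unwieldy in a workbook: rolling volatility across a couple of thousand instruments at once.

```python
# Minimal example of pandas handling a calculation that is awkward in Excel at
# scale: 21-day rolling volatility across 2,000 instruments. The data is
# randomly generated purely for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2020-01-01", periods=500)
tickers = [f"TICK{i:04d}" for i in range(2000)]

# Simulated daily returns: 500 business days x 2,000 instruments.
returns = pd.DataFrame(rng.normal(0, 0.01, size=(len(dates), len(tickers))),
                       index=dates, columns=tickers)

# Annualised rolling volatility for every instrument in one line.
rolling_vol = returns.rolling(21).std() * np.sqrt(252)
print(rolling_vol.iloc[-1].describe())
```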

However, Excel will continue to be widely (and wildly) used across the industry. Built-in functions, as well as vendor add-ins, make it a breeze to bring in data and build analytics. Little or no training is required, and results come quickly. To address the compliance and risk issues, some big firms now require spreadsheets to be reviewed, especially if they are used in high-value use cases where a simple error can be costly.

A new category of software has emerged, known as no-code. No-code development platforms allow end users to create application software through a graphical interface instead of through conventional programming. UiPath and Unqork have built out financial services practices, while Beacon is fully focused on the banking vertical. These platforms claim to address the institutional issues with Excel through a more locked-down and controlled environment, versioning and governance.

Microsoft has also been responding to the threat posed by other approaches to processing data. With over a billion users, it has a lot to defend. A notable change is the addition of an ETL-like capability called Power Query. Users have long been able to pull data in from the web and certain types of API – Power Query allows this to be done in a much more systematic and programmatic way. Once connected, a user can apply a wide variety of standard data transformations before the data is delivered into the workbook. Power Query is no-code, although the underlying scripting language (known as M) can be viewed and edited too.


So what is the future? 

We believe each of the challenges above – information overload, data provider diversification, application interoperability and data interoperability – represents a significant opportunity for all players in the finance ecosystem. We can't run from them.

In the second part of this series, we will outline some of the promising answers to these challenges we see in the market. The future of the finance desktop is bright.

