Building a Forex trading platform using Kafka, Storm and Cassandra

Published in Insight · Oct 22, 2014

Want to learn Kafka, Cassandra, and other big data tools from top data engineers in Silicon Valley or New York? The Insight Data Engineering Fellows Program is a free seven-week professional training program where you can build cutting-edge big data platforms and transition to a career in data engineering at top teams like Facebook, LinkedIn, Slack, and Squarespace.

Learn more about the program and apply today.

Janusz Slawek is currently a data engineer and was an Insight Data Engineering Fellow in the inaugural June 2014 session. Here, he gives a high-level overview of the data pipeline he built at Insight to handle forex data for algorithmic trading, visualization, and batch aggregation jobs.

The foreign exchange market, or forex, is the biggest and most liquid exchange service in the world, with over $4 trillion worth of trades made every day. It is a truly global marketplace that only sleeps on weekends. A fascinating business with roots in ancient history, forex has continuously advanced with technology over the years. However, just as in the old times, being successful at trading takes an analytical mind and a gambler’s soul, as it requires the trader to manage a great deal of risk and stress. While established financial institutions use expensive systems to execute trades, e.g., ultra-low-latency direct market access software, individual investors only have a few simple tools at their disposal, e.g., MetaTrader or NinjaTrader. Affordable software exists and integrates well with brokerage services; it often allows executing custom trading algorithms. However, it does not allow analyzing rich financial data, which is crucial to making informed trading decisions or building trading algorithms.

To address this problem, I created a forex trading platform called Wolf. With Wolf, we can visualize financial data in real time, execute trading orders with little latency, and analyze historical events offline. It is simple to use and integrates seamlessly with external brokers and data providers.

Wolf is composed of a group of services, shown in Figure 1.

Figure 1. Architecture of Wolf.

The Inputs

Wolf processes two types of inputs: updates to the conversion rates of seven major currency pairs, and trading orders from investors. In Figure 1, inputs of the first type originate from the “Data provider” service at the bottom. Inputs of the second type enter the system through the “Rule API” module in the top-right corner. The first stream of information is essential to the operation of Wolf. It is extracted from data aggregated by the HistData.com site and served to the system with a resolution of up to one “tick” per millisecond, i.e., the conversion rate of each currency pair is updated at most once a millisecond. At the same time, users of the system provide the second stream of events by submitting trading orders via a web interface or a RESTful API. Both types of inputs enter a multiplexer (see Figure 1). The multiplexer is implemented with Kafka, a persistent queue that is resilient to hardware failures, has a tunable capacity, and allows buffering data over a specified period of time.
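To make the multiplexer concrete, here is a minimal sketch (not Wolf’s actual code) of how both input streams could be published to Kafka from Python, assuming a local broker and hypothetical topic names ticks and orders:

```python
import json
from kafka import KafkaProducer  # assumes the kafka-python package

# Hypothetical broker address and topic names, for illustration only.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# A conversion-rate update from the data provider (one "tick").
producer.send("ticks", {"pair": "EUR/USD", "rate": 1.2512, "ts": 1414000000123})

# A trading order submitted by an investor through the Rule API.
producer.send("orders", {"pair": "EUR/USD", "op": "<", "threshold": 1.2,
                         "action": "buy", "units": 100})

producer.flush()
```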

Routing data with different velocities using Kafka

I created three classes of consumers for events from the multiplexer: a rule engine, a real-time visualization service, and a batch aggregation service. They are located directly above the multiplexer in Figure 1. The rule engine can pull events every millisecond without interrupting the real-time visualization layer, which consumes every five hundred milliseconds. At the same time, the aggregation service consumes orders of magnitude more slowly, every fifteen minutes. These three consumers process data at very different rates because they represent three different use cases of Wolf. The rule engine must execute trading orders from investors with very little latency. The visualization layer, or grapher, must appear interactive to users without saturating the network. The aggregation layer must process events in large quantities. Each of the three can trade high throughput for low response time, or vice versa, thanks to Kafka’s consumer groups.
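As a rough illustration of this pattern, the sketch below creates three independent consumer groups over a hypothetical ticks topic. Because Kafka tracks offsets per group, each service can read the same stream at its own pace; topic and group names are illustrative, not Wolf’s:

```python
import json
from kafka import KafkaConsumer  # assumes the kafka-python package

def make_consumer(group_id):
    # Each consumer group tracks its own offsets, so the three services
    # can read the same "ticks" topic at completely different rates.
    return KafkaConsumer(
        "ticks",                               # hypothetical topic name
        group_id=group_id,
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

rule_engine = make_consumer("rule-engine")       # polled roughly every millisecond
grapher     = make_consumer("grapher")           # polled every 500 ms
aggregator  = make_consumer("batch-aggregator")  # polled every 15 minutes

# Example: one non-blocking poll on behalf of the grapher.
batch = grapher.poll(timeout_ms=500)
for records in batch.values():
    for record in records:
        print(record.value)
```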

Data pipeline for the rule engine: very fast

Trading orders are expressed as “if then” rules. An example of such a rule is the following statement: “If the conversion rate of EUR to USD is less than 1.2, then buy 100 units.” A rule engine must quickly match a large volume of such rules against the ever-changing market. In other words, an investor wants the above trade executed right when the conversion rate drops below the specified threshold. This is a challenging problem, as conversion rates fluctuate dynamically. The module of Wolf responsible for executing trading orders is called the rule engine. I implemented it on top of the Storm event processor. Storm is a battle-tested solution that integrates very well with Kafka. It allows creating a custom processing flow, i.e., a topology. Below is a run-time visualization of the topology that runs on the Storm cluster:

Storm takes care of serializing, routing, and replaying events from the source in case of failures. It allows building distributed topologies and injecting user-defined business logic. I delegated the actual action of buying and selling currency to an external brokerage service.
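The topology definition itself is Storm-specific, but the rule-matching logic at its core can be sketched on its own. The snippet below is a simplified, hypothetical version of what a bolt in the rule engine might do: check each incoming tick against the open “if then” rules and hand matching trades to the external broker.

```python
import operator

# Hypothetical in-memory rule store; in Wolf this logic would live inside a Storm bolt.
OPS = {"<": operator.lt, ">": operator.gt}

rules = [
    # "If the conversion rate of EUR to USD is less than 1.2, then buy 100 units."
    {"pair": "EUR/USD", "op": "<", "threshold": 1.2, "action": "buy", "units": 100},
]

def on_tick(tick, broker):
    """Match one conversion-rate update against all open rules."""
    for rule in list(rules):
        if rule["pair"] == tick["pair"] and OPS[rule["op"]](tick["rate"], rule["threshold"]):
            # Delegate the actual trade to the external brokerage service.
            broker.execute(rule["action"], rule["pair"], rule["units"])
            rules.remove(rule)  # a one-shot rule is retired after it fires
```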

Data pipeline for the real-time visualization service: fast

The second consumer of events from Kafka is the real-time visualization service. It retains the latest four hours of market updates. Because events arrive sorted by timestamp, I decided to take advantage of yet another well-known open-source solution, the Cassandra database. It is designed to efficiently store series of ordered data. Cassandra associates keys with sorted lists and stores them efficiently using sorted string tables. These are replicated among servers that form a logical ring with no designated masters or slaves. By design, Cassandra is resilient to failures and can replicate data across multiple data centers, which makes it a highly available distributed data store. It allows tuning consistency with read/write levels. It is a very capable solution that draws on ideas from Amazon’s Dynamo and Google’s Bigtable. It is also a feature-rich system that offers distributed counters, lightweight transactions, and much more.
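To illustrate this time-series pattern (a sketch, not Wolf’s exact schema), a Cassandra table can partition ticks by currency pair and cluster them by timestamp, so the latest updates for a pair are stored together and read back in order. The keyspace and table names below are hypothetical, using the Python cassandra-driver:

```python
from datetime import datetime, timedelta
from cassandra.cluster import Cluster  # assumes the cassandra-driver package

cluster = Cluster(["127.0.0.1"])   # hypothetical contact point
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS forex
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# Partition by currency pair, cluster by timestamp descending:
# the newest ticks for a pair sit together on disk and come back already sorted.
session.execute("""
    CREATE TABLE IF NOT EXISTS forex.ticks (
        pair text,
        ts   timestamp,
        rate double,
        PRIMARY KEY (pair, ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

now = datetime.utcnow()
session.execute(
    "INSERT INTO forex.ticks (pair, ts, rate) VALUES (%s, %s, %s)",
    ("EUR/USD", now, 1.2512),
)

# Fetch the last four hours of EUR/USD updates for the grapher.
rows = session.execute(
    "SELECT ts, rate FROM forex.ticks WHERE pair = %s AND ts > %s",
    ("EUR/USD", now - timedelta(hours=4)),
)
```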

Data pipeline for batch aggregation service: slow

The last consumer of events from Kafka is the batch aggregation service. It is designed to store all the historical events, hundreds of terabytes of data. I decided to use Camus to collect data from Kafka and persist it to a Hadoop cluster. I used Hive to calculate aggregated views, e.g., I transformed the data to a lower resolution by averaging conversion rates over time, and sent these views to the real-time visualization service. This approach allows visualizing data at different scales: I could graph the latest minute of data with a resolution of one millisecond and the latest hour of data with a resolution of one minute, to avoid sending an excessive number of time points to a client.
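The downsampling itself ran as Hive queries over the raw ticks in Hadoop; conceptually it is just an average grouped by a coarser time bucket. Here is a plain-Python sketch of that step, with illustrative field names rather than Wolf’s actual schema:

```python
from collections import defaultdict

def downsample(ticks, bucket_ms=60000):
    """Average raw millisecond ticks into coarser buckets (e.g. one per minute).

    `ticks` is an iterable of dicts like {"pair": ..., "ts": ..., "rate": ...};
    the field names are hypothetical, not Wolf's actual schema.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for t in ticks:
        key = (t["pair"], t["ts"] // bucket_ms * bucket_ms)
        sums[key][0] += t["rate"]
        sums[key][1] += 1
    return [
        {"pair": pair, "ts": bucket, "avg_rate": total / count}
        for (pair, bucket), (total, count) in sorted(sums.items())
    ]
```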

Putting It All Together

On top of the real-time visualization service, I built a serving layer that prevents users from querying the data stores directly and improves the response time of Wolf. It is represented as the “Cache” module in Figure 1. Client-side code periodically polls the serving layer for the latest data. To graph the data, I used the Flot JavaScript library, which supports plotting real-time series in a web browser.
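As a rough sketch of such a serving layer (assuming Flask, an in-process cache with a short TTL, and a hypothetical helper that reads the latest points from the Cassandra-backed store), the endpoint below is the kind of thing the client-side grapher would poll:

```python
import time
from flask import Flask, jsonify

app = Flask(__name__)
CACHE_TTL = 0.5   # seconds; roughly matches the grapher's 500 ms poll rate
_cache = {}       # pair -> (fetched_at, points)

def fetch_latest_points(pair):
    # Hypothetical helper: in Wolf this would read the most recent
    # conversion-rate points for `pair` from the visualization store.
    return []

@app.route("/latest/<path:pair>")
def latest(pair):
    # Serve from the in-process cache so browser clients never query Cassandra directly.
    fetched_at, points = _cache.get(pair, (0.0, []))
    if time.time() - fetched_at > CACHE_TTL:
        points = fetch_latest_points(pair)
        _cache[pair] = (time.time(), points)
    return jsonify(pair=pair, points=points)

if __name__ == "__main__":
    app.run()
```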

To summarize, from a technical point of view, the most challenging part of building a trading platform was fitting together a few moving parts so that the system could process a high volume of events with little latency while visualizing them and persisting them to reliable storage. To solve this complex problem, I created a prototype first and only later replaced it with a range of battle-tested solutions. To get Wolf working end-to-end and to glue together an initial wireframe of distributed services, I used the Flask microframework and a couple of shell scripts. This allowed me to quickly implement a proof of concept, successively replace the mocked-up services, and iteratively improve the design of the system. I believe this methodology was really the key to the success of the project.

Feel free to check out the Wolf repository on GitHub to learn more.

Interested in transitioning to a career in data engineering?
Find out more about the Insight Data Engineering Fellows Program in New York and Silicon Valley, apply today, or sign up for program updates.

Already a data scientist or engineer?
Find out more about our Advanced Workshops for Data Professionals. Register for two-day workshops in Apache Spark and Data Visualization, or sign up for workshop updates.
