Latency Arbitrage NYSE-Style

On Friday the SEC announced that the NYSE agreed to settle charges by paying the first-ever SEC financial penalty levied against an exchange. This minuscule $5 million fine was paid by the NYSE without admitting or denying the SEC's charges:

SEC Regulation NMS (National Market System) prohibits the practice of improperly sending market data to proprietary customers before sending that data to be included in what are known as consolidated feeds, which broadly distribute trade and quote data to the public. This ensures the public has fair access to current market information about the best displayed prices for stocks and trades that have occurred.

The exchange violated this rule over an extended period of time beginning in 2008 by sending data through two of its proprietary feeds before sending data to the consolidated feeds. NYSE’s inadequate compliance efforts failed to monitor the speed of its proprietary feeds compared to its data transmission to the consolidated feeds.


This is groundbreaking, and things are starting to heat up in the market structure world. Folks are wising up to what happens when a for-profit exchange model is in charge of fairness, and to the effect this ultimately has on investor confidence. We suggest you stay tuned, as it stands to reason that this is the first of perhaps several similar announcements.

We at Themis have been raising awareness of the numerous integrity issues in our fragmented, for-profit stock market since 2007, including the topic of Latency Arbitrage. In fact, back in Q4 2009, nearly three years ago, we wrote a white paper titled Latency Arbitrage: The Real Power Behind Predatory High Frequency Trading. Simply put, we claimed that direct data feeds distributed from co-located datacenters hold a fractional-second advantage over the Public Quote (the CQS SIP), and that this advantage results in automated scalping and risk-free arbitrage profits at the expense of long-term investors. We wrote that this was a serious issue for market integrity. Specifically, we stated:

Who would bet on a horse race if a select group already knew who won? … It is interesting to note that some of the exchanges make sure that each co-located customer receives equal amounts of connecting cable, so that a server at the northeast corner of a facility has the same latency as one at the southwest corner.  It appears that “fairness” and the equalization of market data speed among co-located firms is an important “must” for the exchanges, but not so when it comes to all other institutional and retail investors.
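
To see why this matters, consider a minimal sketch of the mechanism our white paper described. This is a hypothetical illustration, not anyone's actual trading code: the latency figures, prices, and function names below are all assumptions chosen for clarity, since real direct-feed and SIP latencies vary by venue and by the millisecond.

```python
# Hypothetical illustration of the latency-arbitrage window described in
# the white paper. All numbers are assumed for clarity; real feed
# latencies vary by venue, symbol, and time of day.

DIRECT_FEED_LATENCY_MS = 0.5   # co-located proprietary feed (assumed)
SIP_FEED_LATENCY_MS = 5.0      # consolidated public feed, the CQS SIP (assumed)

def arbitrage_window_ms(direct_ms: float, sip_ms: float) -> float:
    """Interval during which a direct-feed subscriber sees a price change
    that SIP-only participants have not yet received."""
    return sip_ms - direct_ms

def stale_quote_profit(new_bid: float, stale_ask: float, shares: int) -> float:
    """If the direct feed shows the market ticking up to new_bid while an
    offer priced off the stale public quote (stale_ask) is still resting,
    buying that stale offer and selling at the new bid is near risk-free."""
    return max(0.0, new_bid - stale_ask) * shares

if __name__ == "__main__":
    window = arbitrage_window_ms(DIRECT_FEED_LATENCY_MS, SIP_FEED_LATENCY_MS)
    profit = stale_quote_profit(new_bid=20.02, stale_ask=20.01, shares=1000)
    print(f"head start over the public quote: {window:.1f} ms")
    print(f"profit picking off one stale 1,000-share offer: ${profit:.2f}")
```

A penny here and a penny there, thousands of times a day across thousands of symbols, adds up; and because the arbitrageur sees both prices before acting, the trade carries essentially no market risk.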

At the time we were ridiculed and disputed publicly by conflicted insiders. Tradeworx’s Manoj Narang wrote in his April 21st, 2010 comment letter to the SEC that latency arbitrage does not exist, and “debunked” our white paper by name on page 16 of his letter. Obviously we were not deterred in our beliefs, and neither were other important market structure players, such as Eric Hunsader at Nanex and the Zerohedge blog.

As a matter of fact, please refresh your memory with this August 23rd, 2010 article on Zerohedge, titled “Do It Yourself” Latency Arbitrage: How HFTs can manipulate the NBBO at Whim Courtesy of NYSE Empty Quote Gluts.

Zerohedge’s money line:

…this is a stunner, as there is no reason why the NYSE should delay data hitting one data stream, the CQS, but not its own premium product, OpenBook – this would mean there is a gating factor that is essentially imposed artificially for the plebes (and the bulk of investors) who use the commodity CQS tape. It would also open up questions as to how often this form of borderline illegal arbitrage occurs on a daily basis on the NYSE

Yet again, a group of “conspiracy theorists” that includes Senator Ted Kaufman, Nanex, Themis Trading, and Zerohedge has been proven shockingly and sadly accurate in its observations of, and suspicions about, market structure flaws.

Circling back to the NYSE’s improper distribution of market data to HFTs before the public, the SEC notice states:

Since the inception of this feed in June 2008, NYSE often made its data available to customers sooner than NYSE sent data to the Network Processor.  Second, NYSE structured the other proprietary feed to operate independently of the system that sent data to the Network Processor.  As a result, this other proprietary feed was not affected when delays were experienced by the NYSE system that sent data to the Network Processor. Third, NYSE’s internal system that sent data to the Network Processor had a software issue that caused delays during multiple periods of high trading volume from early to mid-2010.  During these periods, NYSE often sent data to the Network Processor after NYSE sent data to customers through the two proprietary feeds at issue.
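
The SEC’s third point deserves a closer look. As rough intuition for how a single queued path to the Network Processor falls behind precisely when volume spikes, here is a toy Python model. To be clear, this is our own simplification, not NYSE’s actual architecture (which the SEC order does not fully specify); the drain rate and message counts are assumptions:

```python
from collections import deque

# Toy model: the proprietary path publishes each update immediately, while
# the consolidated path funnels updates through one internal queue that
# drains at a fixed rate. When message volume spikes, the queue backs up,
# so the public feed lags further behind exactly when trading is busiest.

DRAIN_PER_MS = 10                          # consolidated-path capacity (assumed)

queue = deque()                            # updates awaiting the Network Processor
for now_ms in range(1, 101):
    arrivals = 8 if now_ms <= 50 else 40   # volume spike in the second half
    queue.extend([now_ms] * arrivals)      # stamp each update with its send time

    lag_ms = 0
    for _ in range(min(DRAIN_PER_MS, len(queue))):
        sent_at = queue.popleft()
        lag_ms = now_ms - sent_at          # proprietary-feed lag is ~0 by design

    if now_ms % 20 == 0:
        print(f"t={now_ms:3d} ms  queue depth={len(queue):4d}  public-feed lag={lag_ms:2d} ms")
```

In the calm first half the queue stays empty and the public feed keeps up; once the burst begins, the backlog and the public feed’s lag grow every millisecond, which is consistent with the delays the SEC describes during periods of high trading volume.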

You may hear weak stock exchange defenses that this was merely a “software issue.” After all, it is fashionably convenient of late to blame the techies for a botched IPO, or “knuckleheads” for an algo gone amok. Those of you paying attention since 2008 know differently; you know that this is another example of the for-profit exchange model sacrificing fairness and integrity in favor of short-term profits derived from its largest customers. And if this has been going on at the venerable NYSE, you had better believe that investors everywhere are questioning whether similar integrity and fairness issues are plaguing other for-profit stock exchanges, such as NASDAQ, BATS, Direct Edge, and even dark pool ATSs. Are there slowdowns and speed bumps in their market data as well?

I suppose we should keep our eyes peeled for other SEC announcements, as it is reasonable to assume that they have all been, or are being, investigated.

Stay tuned this week; we will be dissecting the SEC’s NYSE findings, because there are other issues involved, such as Section 17a, the Consolidated Tape Association and who runs it, and Rule 603(a) violations.