Thursday, June 20, 2024

EXCLUSIVE: “The NIC of Time” – Dan Brown, Michael Skory and Bob diPietro, Cisco in ‘The Fintech Magazine’

Today’s traders need smart options if they are to react to the market in real time. We asked three low-latency experts from Cisco how two technologies, used in combination, can help.

The network interface card, aka NIC, has been around for a while; but it’s the technology’s more intelligent ‘brother’, the SmartNIC, who has been turning heads more recently. This capable beefcake goes by several aliases – probably the dullest acronym is data processing unit (DPU) – but, at heart, he’s a networking adapter card with a programmable processor.

Many say SmartNIC’s full enterprise potential hasn’t yet been realised, partly because of the investment needed. But in environments where cost, time and adaptability are all finely balanced, as in the case of financial services and, particularly, in high-frequency trading (HFT), the numbers soon start to add up. When a SmartNIC carries an integrated circuit that can be programmed in the field of operation – a field programmable gate array (FPGA) – it allows all manner of computational add-ons to be accessed by a customer, using open-source tools. Combined with the processing power of a SmartNIC – which can offload work from a central processing unit (CPU) and shift network packets 40 times faster than traditional high-performance NICs – this double act can change the nature of trading and, quite literally, buy time. It’s been said that the return on investment (ROI) can be measured in fractions of a second.

So, we asked three ultra-low latency experts at Cisco, one of the companies leading developments in this area for financial services, to help us get better acquainted with SmartNIC and FPGAs.

Dan Brown is a technical solutions architect, responsible for ‘anything that’s related to the nanosecond or even lower’, who works alongside fellow ultra-low latency specialist Mike Skory, and Bob DiPietro, an ultra-low latency technical solutions architect with particular experience in toolchain development for financial markets. We began by asking them to give us a history lesson.

THE FINTECH MAGAZINE: Before programmable NICs, what did the industry have to settle for? And when did things really start to speed up and why?

Dan Brown: The latency race started in 2007. Ever since then, people have been building technology infrastructure to basically reduce the length of time it takes to trade on exchanges.
Bob DiPietro: There’s been a whole evolution. It was originally based on NICs, with everything going back to a processor via the kernel stack, and all logic based in the host. Then, software-based stacks and kernel bypass arrived. Next, with FPGAs, some or most of the work that was done in the host could be offloaded and the latency reduced. You can now program in the FPGA and avoid the whole chain up to the host and back. For more complex problems, you can also use the host in conjunction with the FPGA. For example, you can create a hybrid stack, where you don’t have to build an entire transmission control protocol (TCP) engine inside the FPGA and yet can still send TCP from the FPGA for ultra-low latency. This avoids the limited functionality, performance and resource constraints that come with putting a TCP engine in an FPGA.
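The progression Bob describes – from parking a thread in the kernel to never leaving user space – can be loosely illustrated in software. The Python sketch below is not a kernel-bypass stack; it simply contrasts a blocking receive (a kernel-stack round trip that sleeps) with busy-polling a non-blocking socket over localhost UDP, which mimics the ‘never sleep, always spin’ idea that real bypass stacks take much further by avoiding the kernel entirely.

```python
# Toy illustration of the latency layers described above: a blocking
# kernel-stack receive versus busy-polling the same UDP socket pair.
import socket
import time

def round_trip(busy_poll: bool, iters: int = 200) -> float:
    """Return the best observed one-hop latency in nanoseconds."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = rx.getsockname()
    if busy_poll:
        rx.setblocking(False)
    best = float("inf")
    for _ in range(iters):
        t0 = time.perf_counter_ns()
        tx.sendto(b"tick", addr)
        if busy_poll:
            while True:            # spin in user space instead of sleeping
                try:
                    rx.recvfrom(64)
                    break
                except BlockingIOError:
                    pass
        else:
            rx.recvfrom(64)        # park the thread in the kernel
        best = min(best, time.perf_counter_ns() - t0)
    rx.close()
    tx.close()
    return best

print("blocking best ns :", round_trip(False))
print("busy-poll best ns:", round_trip(True))
```

On a general-purpose OS the gap between the two modes is noisy and small; the point of the sketch is only the structural difference in where the waiting happens.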

DB: At Cisco, we have a few different categories of NICs – the X25 and X100, which we’d classify as our drop-in NICs; they can be used to accelerate communications to the network. You can apply our kernel-bypass stack to that and get into the sub-microsecond range easily, for near-enough any application. The primary use case for those NICs has been in financial services for trading, but they do have applications outside of that.

The NICs that have been more focussed towards our FPGA development users – one based on the Virtex UltraScale+ VU5P and one on the VU9P, both from Xilinx – work in conjunction with our firmware development kit (FDK), which is a full suite of tools for development on FPGA cards. With that, you can go from knowing not too much about FPGAs, to knowing how to program in Verilog or VHDL, and move towards creating a financial services package that will allow you to start trading on the exchanges relatively easily.

TFM: Can we drill down into what SmartNICs loaded with an FPGA can do for traders and financial institutions, then?

Michael Skory: This is all about time to market; it allows customers the flexibility and creativity to meet their specific business objectives – their trading initiatives. In reducing their time to market, they can focus primarily on the logic. That is the key to it all.

DB: Put another way, it reduces the compilation times of our customers’ trading strategies. So what does that mean, in terms of use cases in the markets? Well, you compile an FPGA firmware image and deploy it onto a card, in order for that card to start trading on the exchange. But what people like to do is change how the market data is being interpreted – the way the market is being seen, or maybe some of the parameters. Because that’s not easily done inside a running FPGA, they recreate the FPGA image on the fly, and then redeploy it directly onto the card, in order to trade on the exchange without any downtime or going against the strategy they might be working towards. So, it reduces the length of time needed to compile and deploy an image, and also gets them into the market a lot quicker.
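The redeploy-without-downtime pattern Dan describes can be sketched in plain software. Everything below is an invented stand-in, not a Cisco FDK API: `Card` models a SmartNIC holding one live image, `build_image` models a recompile, and the swap is an atomic reference change, so the live path keeps serving ticks throughout.

```python
# Software-only sketch of "rebuild the image off to the side,
# then swap it in with no downtime".
from typing import Callable

Strategy = Callable[[float], str]

class Card:
    """Stands in for a SmartNIC holding one active firmware image."""
    def __init__(self, image: Strategy):
        self._image = image

    def deploy(self, image: Strategy) -> None:
        self._image = image            # atomic reference swap: no downtime

    def on_tick(self, price: float) -> str:
        return self._image(price)      # live path keeps running throughout

def build_image(threshold: float) -> Strategy:
    """'Compile' a new image with updated parameters."""
    return lambda price: "BUY" if price < threshold else "HOLD"

card = Card(build_image(threshold=100.0))
print(card.on_tick(99.0))                   # BUY under the old image
card.deploy(build_image(threshold=95.0))    # recompile + redeploy on the fly
print(card.on_tick(99.0))                   # HOLD under the new image
```

On real hardware the ‘swap’ is a partial or full reconfiguration of the card rather than a pointer change, but the operational shape – build aside, cut over atomically – is the same.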

BD: With the FPGA, it’s fully programmable and can easily be changed several times a day. FPGAs can have many connections and give the designer the ability to do parallel processing and then, as a result of those computations, get an action out in the order of nanoseconds. It’s all about getting to an answer in real time and acting on that answer with the lowest possible latency. The Cisco SmartNICs and FDK provide a platform for designers to achieve that goal.
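Bob’s point about parallelism is worth unpacking: in an FPGA, independent checks are separate blocks of logic evaluated on the same clock edge, so a multi-condition decision costs one pipeline pass rather than a sequential loop. The Python sketch below only models the logic – the signal names and thresholds are invented for illustration; in hardware the three comparisons would run concurrently and feed a single AND gate.

```python
# Software model of a parallel decision pipeline: each comparison below
# corresponds to an independent block of FPGA logic, combined by one gate.

def decide(bid: float, ask: float, last: float) -> bool:
    # In hardware these three comparisons evaluate concurrently...
    spread_ok   = (ask - bid) < 0.05   # spread-filter block
    momentum    = last > bid           # momentum block
    not_crossed = ask > bid            # sanity-check block
    # ...and a single AND gate combines them into the fire signal.
    return spread_ok and momentum and not_crossed

print(decide(bid=100.00, ask=100.02, last=100.01))  # fires: all gates pass
print(decide(bid=100.00, ask=100.20, last=100.01))  # blocked: spread too wide
```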

TFM: So, does the framework allow you to accelerate operations over a typical processor, even if that has a higher clock speed? And how does introducing speed grade 3 help developers? 

BD: You can process streams of data, write custom code and don’t have to go back to the host to use a general-purpose processor. Basically, you can build your own logic, in hardware, to process the data streams with ultra-low latency. This is very powerful, because, in most cases, it is much faster than if you went to the CPU. Sometimes you may have a design where, because the FPGA build tools use iterative algorithms to place logic and route connections within the FPGA, the tools struggle to meet timing. Speed grade 3 allows existing designs to meet timing more easily and, in some cases, might speed up the design itself. It also allows new designs to meet timing that could not be met with the previous speed grade. Essentially, with speed grade 3, you might be able to synthesise logic that runs faster, accelerate existing designs, and create optimised designs which were not achievable before.

TFM: Traders were looking at perhaps changing their algorithms once a day; how many changes are we looking at nowadays, then, with speed grade 3?

MS: Back in the day, an algorithm used to last maybe a month, two months; now it could last hours. Sometimes, you need to make a change right away, and this allows you to do that. Speed grade 3 is about creating more margin for the compiling; it gives you the ability to react in real time, rather than after the fact. You can take sections of code, manipulate them, and then, using the toolset, get the timing perfect, send it off, and there you go. And the better you are, the faster you are – so it allows you to really react to changes in the market. What we’re giving them with speed grade 3 is the platform to do so.

TFM: From a business perspective, what does this mean for companies that embrace this technology?

DB: A lot more people are moving away from having solely software applications and are finding ways to deploy them inside hardware. At Cisco, we are trying to embrace the future with those customers by making sure anything within our ultra-low latency line-up is FPGA-enabled. Alongside that, with our FDK, we can support the unlocking of all of our devices, so that people can put their own strategic components inside them; we do it not just on our NIC cards, but across all of our switch line-up, too. We’re here to embrace our customers’ needs as they move towards an FPGA/hardware-driven journey and, going forward, I see the use of high-level synthesis tools for the conversion of software applications into hardware becoming more and more of a day-to-day activity. I think it’s going to be introduced into a lot of our customers’ software development lifecycles, as well as their strategic operations.


This article was published in The Fintech Magazine #23, Pages 19-20
