Nicole Hemsoth
January 22, 2017

At the end of 2016, Amazon Web Services announced it would be making high-end Xilinx FPGAs available via a cloud delivery model, starting in a developer preview before broadening out with higher-level tools to help potential new users onboard and experiment with FPGA acceleration as the year rolls on.

As Deepak Singh, General Manager for the Container and HPC division within AWS, tells The Next Platform, the application areas where the most growth is expected for cloud-based FPGAs are many of the same ones we detailed in our recent book, FPGA Frontiers: New Applications in Reconfigurable Computing. These include crypto and security, genomics, financial services, and a broader set of machine learning workloads. “For security, genomics, and financial services, there are already a lot of use cases for FPGAs. We are still waiting to see how FPGAs will interplay with machine learning applications—that’s a very broad area, but we are providing the tools and support with more to come to meet that growth.”

Singh has been leading efforts focused on specialized workloads, including high performance computing, for well over a decade at AWS, and has been active in the shift from general purpose cloud (the bread and butter at AWS) toward far more specialization—from new compute- and memory-intensive instance types to GPU acceleration for HPC and other workloads. In addition to monitoring the hardware infrastructure and application requirements for these more unique workloads, Singh says he has also been tracking broader trends that impact both, including machine learning and its associated hardware demands.

Machine learning is a focus for FPGAs, Singh says, but given the trend toward specialization for certain customers at scale, he says it is still too early to tell where reconfigurable devices might fit in. “It could be GPUs, it could be CPU-only, it could be a custom ASIC, tensor processor, or even an FPGA. It could also be some kind of combination of those.” Ultimately, he says that because of the trails already blazed with GPUs and the rich programming environment and ecosystem around them, users are more open to exploring new architectures and accelerators. That will continue to be the case, at least for a certain class of users with more interesting (non-general purpose) workloads, and his team will keep an ear to the ground for new architectures to add to the AWS pool of options. For now, however, Singh says that the early interest that fed the FPGA instances will bring a new group of users to the AWS table—and will also engage partners with FPGA expertise, like those detailed below.

Singh says that for many end users that have moved most, if not all, of their general purpose workloads to the cloud, on-site datacenters are a thing of the past, minus certain appliances that might be accelerating a single, specialized workload. “There might be nothing left but an FPGA-based appliance because everything else has been moved out.” For these cases, he says the existing relationships with hardware and appliance vendors that are FPGA-oriented can eventually be moved to the Amazon cloud as well. There are not many vendors with FPGA-centric machines; however, as we have profiled in the past, Edico Genome (which offers an FPGA-accelerated genomics platform) and Ryft (an FPGA-driven appliance maker focused on large-scale analytics) are both partners in Amazon’s effort to capture new specialized workloads with the F1 instance.

As Singh notes, the GPU ecosystem was already mature when AWS added GPU instances to the mix. For FPGAs, however, there is still a long road ahead to create an offering that is broadly accessible. Advancing this falls to the vendors and ISVs to keep pushing, but AWS itself is going to focus on opening access by providing far more than the hardware development kit in the months to come.

“We do plan to support high-level tools, including SDAccel, which includes OpenCL to expand the reach for developers. There is also an existing set of software developers who understand FPGAs well and know how to use the core tooling. What those users are looking for is a channel—a way to provide their software to a broad set of customers, while at the same time we are working on usability and accessibility by taking input about the kinds of libraries, programming, and other tools people need for better access,” Singh explains. “We want to make sure that programming and using FPGAs is not the barrier.”
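To illustrate why OpenCL support through tools like SDAccel lowers the barrier Singh describes, here is a minimal sketch of the kind of data-parallel kernel a developer could write at that level of abstraction instead of in a hardware description language. This is an illustrative example, not AWS’s or Xilinx’s actual tooling flow; the kernel name and parameters are hypothetical.

```opencl
// Hypothetical OpenCL C kernel: element-wise vector addition.
// With an OpenCL-based flow such as SDAccel, a kernel like this is
// compiled to an FPGA bitstream rather than hand-written in Verilog/VHDL.
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out,
                   const int n)
{
    int i = get_global_id(0);  // one work-item per output element
    if (i < n)
        out[i] = a[i] + b[i];
}
```

The appeal of this model is that the same source resembles what a GPU programmer would write, while the FPGA toolchain handles synthesis, place-and-route, and pipelining behind the scenes.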

Singh says that the lead-up to creating an FPGA instance type did not spark much internal debate. Those on his team at AWS who focus on specialized computing (HPC, accelerated applications, etc.) have been watching interest in FPGAs grow, along with compute requirements from some of the core markets that could most benefit: those who have used FPGAs for years (financial services, for example), those getting a relatively newer push toward FPGA acceleration (crypto/security, genomics), and those looking for any combination of processing engines to meet the needs of a rapidly evolving set of algorithms and approaches (machine learning and deep learning).

As we have noted in several articles here over the last two years in particular, the trend toward specialization is pushing companies to look beyond standard CPUs and into ASICs, GPUs, and FPGAs. This increasing heterogeneity for applications at the high end of the compute spectrum is spurring more interest in novel or non-standard architectures. Intel’s 2015 acquisition of Altera, the addition of FPGAs in the public cloud, and the growing set of applications that can benefit from FPGA acceleration all point to a very big year indeed for FPGAs—something we will track extensively in 2017.

https://www.nextplatform.com/2017/01...et-trajectory/