Microprocessors have traditionally dominated the realm of computing, and in the drive toward greater compute capability, silicon-based ICs were consistently improved in device density. Every couple of years, transistor density doubled in accordance with Moore’s Law, leading to a veritable goldmine of technological innovation. Amid this “gold rush,” several large semiconductor companies established a foothold in the market that has been unshakeable, given the expertise and expense involved in creating custom silicon. Now, with the ever-increasing demand for compute-hungry devices, ASICs, FPGAs, and embedded processors alike are being asked to perform more complex tasks, yet the barriers to entry have remained far too high.
On the end-application side of the spectrum, the implementation of a system depends heavily on the bottom line. The cost of the IC, licensing fees, and ease of programmability all directly contribute to the price of creating modern electronics. Designers and developers alike have relied on both ASIC accelerators and traditional embedded processor solutions in application-specific standard products (ASSPs) to achieve the core functionality of their design while also benefiting from the well-understood design flow of generic processors. The goal: a rapid time to market (TTM) with minimal cost while meeting the growing compute complexities demanded in modern applications. The question then becomes: Which chipset(s) can run the application most efficiently with the least development time and cost?
This is where Efinix FPGAs fill the gap. Efinix FPGAs marry the inherent architectural benefits of FPGAs with the ease of programmability of popular embedded processors by implementing a straightforward software approach for hardware and software partitioning. This article discusses this new design methodology that fully realizes the potential of FPGAs, allowing designers to maximize system design and efficiency.
The fundamental advantages of the Efinix architecture
Efinix Titanium FPGAs strike a balance between low-end and high-end FPGAs, offering high-density devices in small packages, with low power consumption and at a low price point. Using 16-nm process technology, Titanium FPGAs can pack up to 1 million logic elements (LEs), integrated memory blocks, and high-speed DSP blocks into packages as small as 5.5 × 5.5 mm.
This is made possible by Efinix’s Quantum compute fabric, built from reconfigurable tiles called exchangeable logic and routing (XLR) cells. During compilation, Efinix’s software dynamically chooses whether each XLR cell serves as routing or logic, optimizing its silicon resources. This does away with traditional dedicated routing and allows LEs to be smaller and used more flexibly, resulting in remarkably high utilization compared with traditional FPGAs. Soft RISC-V cores are instantiated in the FPGA fabric when required; because RISC-V is not bound to proprietary IP cores such as ARM’s, Efinix FPGAs implement CPU architectures without any licensing fees.
The rise of RISC-V
RISC-V is a free instruction set architecture (ISA) that comes with a wealth of software examples, IP cores, and physical hardware. The main difference between RISC-V and ARM is that RISC-V is an open standard wherein the ISA does not define any specific microarchitecture. Other popular processing technologies (x86, x64, and ARM) have business models built around payment for the right to use the vendor’s ISA and microarchitecture/hardware.
The growing popularity of the modular, open-source RISC-V architecture allows developers to use these cores royalty-free and create non-vendor-specific compute solutions. This opens doors for innovation as Moore’s Law continues to slow down. RISC-V processing cores can be integrated into FPGAs to combine ease of programmability with the parallel processing and flexibility/reconfigurability of the FPGA and ASIC architecture.
A new design methodology with Efinix FPGAs
The merit of Efinix FPGAs does not end at their optimized cost and performance. The concept of “quantum acceleration” also brings the same level of ease of programmability that traditional embedded processors enable. Quantum acceleration relies on two key techniques to ease and optimize both the design flow and the design itself:
- The use of the RISC-V processor
- The use of the quantum accelerator
Benefiting from the ease of programmability of the RISC-V ISA with embedded RISC-V cores
First, highly scalable RISC-V processors are used as the workhorse of the system, ensuring that the system’s functionality is expressed to the greatest extent possible in software. An inherent benefit of using a RISC-V processor is the custom instructions that can extend the capabilities of the processor to match the requirements of the application. This ensures that the processor performs highly accelerated tasks natively with maximum efficiency, all while sticking to the familiar C/C++ language.
Consider a designer who wants to perform a convolution in C. On a conventional embedded processor without custom extensions, the operation must be broken down into several simpler instructions; with a RISC-V custom instruction, it can be performed in a single execution. These application-specific instructions greatly reduce the number of cycles that would be needed using standard instructions and massively improve system efficiency by reducing the power consumed. For the convolutions typically used in artificial intelligence algorithms, custom instructions accelerate the operation by a factor of 40× to 50×, resulting in a significant increase in system performance.
Additional benefits of custom instructions include time to market with a broader portfolio of products. One partner of Efinix has created a library of hundreds of custom instructions that can be instantiated and called on demand. The result for them has been that a broad portfolio of end-user products can be defined, created, and rapidly delivered with a common hardware platform, differing only in the software optimization of the RISC-V processor.
Using the architectural flexibility of the FPGA for straightforward acceleration
Custom instructions, however, tend to be small. There may be cases when users want to, for instance, perform a mathematical function on larger blocks of data. The quantum accelerator socket defines a framework that gives the user the ability to “point at” data, retrieve it, and edit its contents as the application requires. This accelerator socket has specific inputs and outputs to the accelerator function, RISC-V processor, direct memory access (DMA) controller, and other processing blocks (see figure). DMA, callbacks, and hookups to the RISC-V processor are all called in C and handled for the programmer nearly automatically. All that is required is a tiny amount of VHDL for the acceleration itself, many examples of which Efinix already provides.
Figure 1: The Efinix quantum accelerator socket has specific inputs and outputs that enable users to point to large blocks of data to retrieve and edit them for hardware acceleration. This socket allows for subsequent data movement to occur seamlessly and with minimal VHDL design effort so that designers can simply focus on choke points in performance. (Image link: https://www.efinixinc.com/art/riscv-standard-wrapper.png)
The benefits of this approach can be seen in a dramatic reduction in time to market. Using this predefined acceleration construct, the Efinix partner was able to produce a camera system with an input sensor, an artificially intelligent core performing object detection and classification, and an output display subsystem. From project start to a fully functional demonstration vehicle took a little over one week.
The micro and macro approach to hardware acceleration
To sum it up, Efinix FPGAs leverage the inherent benefits of the open RISC-V standard in tandem with a custom accelerator framework, using custom instructions for small operations and the accelerator socket to rapidly modify large blocks of data. This allows electronics designers and manufacturers to meet the goals of:
- Minimizing the TTM with an optimized software flow design
- Minimizing design and production costs with an architecturally flexible, software-defined platform
- Creating a future-proofed design that both meets the compute needs of modern applications and can be easily upgraded with additional features
The implications of an easily programmable FPGA
Efinix is set to take the FPGA from the traditional intriguing design alternative to a design necessity. These design techniques blur the lines in traditional system architectures. The availability of cost-effective, low-power FPGAs that can be configured with the speed of traditional embedded processor solutions can drive the mass market adoption of these platforms by:
- Replacing ASSP designs entirely with an integrated FPGA solution
- Expanding upon the features of existing MCUs to adapt them to new requirements and new markets
- Replacing embedded processors without losing the simplicity of the traditional embedded processor (e.g., ARM) design flow
The time it takes to conceive and produce a solution is already massively decreased over custom silicon solutions such as ASICs. ASSPs can be replaced by adaptable FPGA designs with custom functionality at a price point that drives their adoption. Efinix embedded RISC-V processors can also be customized to emulate and expand upon the features of existing MCUs. From the starting point of an emulated, familiar MCU, enhanced custom capabilities can be instantiated in the FPGA fabric alongside custom acceleration blocks and I/O signal conditioning.
Making FPGAs a design necessity
The most significant paradigm shift lies in the ability to quickly innovate with the simplicity of the traditional embedded processor (e.g., ARM) design flow. The highly integrated Efinix solution contains derivatives of the now-familiar controller architecture with highly accelerated companion blocks, all within the same silicon die. This opens doors for Efinix FPGAs in more diverse markets outside of edge computing, from basic IoT devices to data-center cards.
Designers can now use a powerful FPGA in simpler designs that traditionally rely on a generic embedded processor. Projects that have conventionally used a standard CPU or MCU, such as an IoT sensor node, can now be served by an FPGA consuming the same amount of power (if not less) in a small form factor, at a low cost, and with a straightforward software design flow. The benefit is that these designs are intrinsically future-proofed and upgradeable, given the architectural flexibility of the Efinix FPGA platform.
Designers making the push toward more compute-hungry devices can also leverage Efinix FPGAs. This allows businesses to shift from the traditional IoT and broadband use cases of home automation, machine monitoring, and basic HD video streaming on a mobile device to next-generation applications such as autonomous vehicles, seamless immersive-reality experiences (AR/VR headsets), and more. Efinix FPGAs also meet these edge-based bandwidth-hungry or time-sensitive use cases. Businesses can meet the future of computing with the right processing capability to train, run, and upgrade their respective machine-learning algorithms efficiently.
Efinix FPGAs can simplify the design cycle dramatically, immediately shedding the requirement of hardware development for ASICs and easing the process of hardware acceleration that is less straightforward with CPUs and GPUs. The accessibility of these FPGAs has huge implications, allowing for adoption into new markets where FPGAs may not have previously been seen as a viable solution. In opening the hardware design environment to familiar software techniques, Efinix is massively enlarging the scope of designs that can be ported to FPGAs, further shortening TTM and increasing the flexibility of the end-user application. The resulting cost and density advantages will see FPGAs expand in market reach well beyond traditional application spaces and drive adoption into applications that can touch every aspect of our lives.