Compiler Design Services

ShuraCore specializes in creating new and modern ports of GCC, GDB, the GNU libraries, Binutils, LLDB, and the LLVM utilities and libraries. We optimize and adapt existing compilers for any hardware platform. The ShuraCore team provides a full range of services for developing compilers and interpreters of the types described below.
We also develop SDKs, virtual machines, obfuscators, and code deobfuscators for our clients; port debuggers and simulators to new hardware platforms; and implement high-performance optimizations. In addition, we build compilers for neural and tensor processors and create developer tools based on the LLVM framework.

JIT and AOT

JIT (Just-in-Time) compilers are used to improve the performance of interpreted programs. JIT compilation translates a program into native machine code while the program is running, which is why it is also known as dynamic compilation. The advantage of a JIT compilation strategy is that it has complete knowledge of the target architecture on which the program is running, so the JIT system can optimize the code for the specific processor.
An AOT (Ahead-of-Time) compiler translates a higher-level programming language, or intermediate code, into native, hardware-dependent machine code before the program runs. The result of this compilation is a binary that can be executed natively. The main difference between AOT and JIT compilation is that native code is generated in advance rather than during program execution.
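
To make the distinction concrete, here is a minimal, self-contained sketch of the core JIT idea: machine code is produced and executed at runtime, whereas an AOT compiler would have emitted the same bytes into a binary at build time. This is an illustrative example only (it assumes an x86-64 POSIX system), not code from a customer project:

    // Minimal JIT illustration: emit x86-64 machine code into an executable
    // buffer at runtime and call it. An AOT compiler would instead write the
    // same instructions into an object file before the program ever runs.
    // Assumes x86-64 and a POSIX system (mmap); real JITs enforce W^X policies.
    #include <sys/mman.h>
    #include <cstdio>
    #include <cstring>

    int main() {
      // Machine code for: mov eax, 42; ret
      const unsigned char code[] = {0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC6 ^ 0x00};
      // (0xC3 is the ret opcode; written plainly below for clarity)
      const unsigned char prog[] = {0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3};

      void* mem = mmap(nullptr, sizeof(prog), PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (mem == MAP_FAILED) return 1;

      std::memcpy(mem, prog, sizeof(prog));           // "compile" at runtime

      auto fn = reinterpret_cast<int (*)()>(mem);     // treat the buffer as a function
      std::printf("jit result: %d\n", fn());          // prints 42

      munmap(mem, sizeof(prog));
      return 0;
    }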

The ShuraCore team uses ready-made JIT and AOT compilers to improve the customer's product. Adopting JIT or AOT compilation is an implementation approach that can take your software to a new level. As part of this collaboration, we create a general JIT or AOT compiler implementation concept and apply proven methodologies to improve your product and business solution. Our team is committed to creating a unique and successful JIT or AOT compiler for each customer.

AsmJit is a complete JIT and AOT assembler for the C++ language. It can generate native code for the x86 and x64 architectures and supports the entire x86/x64 instruction set, from legacy MMX to the newest AVX-512. It has a type-safe API that allows the C++ compiler to perform compile-time semantic checks even before the generated code is assembled or executed.
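
As a brief illustration of how such a library is typically used (a minimal sketch modeled on AsmJit's public examples; exact initialization calls may differ between AsmJit versions), the following program JIT-assembles a function that returns 1 and then executes it:

    // Minimal AsmJit sketch: JIT-assemble `int fn() { return 1; }` and call it.
    // Modeled on AsmJit's public examples; API details may vary by version.
    #include <asmjit/asmjit.h>
    #include <cstdio>

    using namespace asmjit;

    typedef int (*Func)();

    int main() {
      JitRuntime rt;                       // Owns the executable memory.
      CodeHolder code;                     // Holds generated code and relocations.
      code.init(rt.environment());         // Match the host (JIT) environment.

      x86::Assembler a(&code);             // Emits x86/x64 instructions into `code`.
      a.mov(x86::eax, 1);                  // eax = 1
      a.ret();                             // return eax

      Func fn;
      if (rt.add(&fn, &code) != kErrorOk)  // Relocate and make the code executable.
        return 1;

      std::printf("%d\n", fn());           // Prints 1.
      rt.release(fn);                      // Free the JIT-allocated code.
      return 0;
    }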

Other JIT frameworks and libraries that we work with include:

  • A lightweight JIT compiler based on MIR (Medium Internal Representation)
  • A cross-platform JIT engine based on Eclipse OMR
  • A small JIT engine originally written by Adobe for Flash
  • MCJIT, the JIT implementation in LLVM
  • LibJIT
  • GCC – libgccjit
  • GNU lightning

Currently, the three-stage compiler structure is the most common. Regardless of the exact number of phases in a particular compiler project, they can be grouped into three stages: the front-end, the middle-end, and the back-end. The front-end handles the syntax and semantics of a specific source language. The middle-end performs optimization and analysis and usually does not depend on the processor architecture. The back-end is responsible for generating executable code for a specific hardware platform or processor architecture.

Front-end Compilers

The front-end analyzes the source code and creates an internal representation of the program, the intermediate representation (IR). The front-end consists of three phases: lexical, syntactic, and semantic analysis. Front-end compilers perform the following functions, illustrated by the sketch after this list:

  • Symbol table management;
  • Data structure management;
  • Analyzing the source code and recording the information (location, type, and scope) associated with each symbol;
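
The following toy front-end is a hypothetical sketch (not tied to any particular project): a hand-written lexer and recursive-descent parser that build an AST for simple arithmetic expressions and then walk it. A production front-end would add full semantic analysis and lower the AST to an IR:

    // Hypothetical front-end sketch: lexer + recursive-descent parser that
    // builds an AST for "+-*/" expressions, then evaluates it by walking it.
    #include <cctype>
    #include <iostream>
    #include <memory>
    #include <stdexcept>
    #include <string>

    struct Node {                        // AST node; op == 0 marks a numeric literal.
      char op = 0;
      double value = 0;
      std::unique_ptr<Node> lhs, rhs;
    };

    class Parser {
    public:
      explicit Parser(std::string s) : src_(std::move(s)) {}
      std::unique_ptr<Node> parse() { return expr(); }

    private:
      // expr := term (('+' | '-') term)*
      std::unique_ptr<Node> expr() {
        auto n = term();
        while (peek() == '+' || peek() == '-') {
          char op = get();
          n = binary(op, std::move(n), term());
        }
        return n;
      }
      // term := number (('*' | '/') number)*
      std::unique_ptr<Node> term() {
        auto n = number();
        while (peek() == '*' || peek() == '/') {
          char op = get();
          n = binary(op, std::move(n), number());
        }
        return n;
      }
      // number := [0-9]+   (the lexical-analysis part of this toy front-end)
      std::unique_ptr<Node> number() {
        std::string digits;
        while (std::isdigit(static_cast<unsigned char>(peek()))) digits += get();
        if (digits.empty()) throw std::runtime_error("number expected");
        auto n = std::make_unique<Node>();
        n->value = std::stod(digits);
        return n;
      }
      static std::unique_ptr<Node> binary(char op, std::unique_ptr<Node> l,
                                          std::unique_ptr<Node> r) {
        auto n = std::make_unique<Node>();
        n->op = op;
        n->lhs = std::move(l);
        n->rhs = std::move(r);
        return n;
      }
      char peek() {                      // skip whitespace, return next character
        while (pos_ < src_.size() &&
               std::isspace(static_cast<unsigned char>(src_[pos_]))) ++pos_;
        return pos_ < src_.size() ? src_[pos_] : '\0';
      }
      char get() { char c = peek(); if (c) ++pos_; return c; }

      std::string src_;
      size_t pos_ = 0;
    };

    // Stand-in for later phases: walk the AST. A real compiler would lower it to an IR.
    double eval(const Node& n) {
      if (n.op == 0) return n.value;
      double l = eval(*n.lhs), r = eval(*n.rhs);
      switch (n.op) {
        case '+': return l + r;
        case '-': return l - r;
        case '*': return l * r;
        default:  return l / r;
      }
    }

    int main() {
      auto ast = Parser("1 + 2 * 3").parse();
      std::cout << eval(*ast) << "\n";   // prints 7
    }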

For front-end development, the ShuraCore team applies design patterns, data structures, and generally accepted formal language techniques. Tools, technologies, and programming languages that we use for front-end development:

  • OpenMP, OpenACC
  • LLVM
  • C++, Rust
  • bison, flex, yacc, ANTLR

ShuraCore collaborates with academia, the open-source community, and industry partners, which enables us to develop and deliver versatile front-end compilers to our customers.

Middle-end Compilers (Optimizers and Analyzers)

Middle-end compilers optimize and analyze the program's intermediate representation. The scope of this analysis and optimization varies widely, from a single function to the whole program. The ShuraCore development team specializes in two areas of middle-end compilers: optimizers and analyzers.
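
As an illustration of the kind of transformation a middle-end performs, here is a hypothetical sketch of constant folding over a toy three-address IR (the IR format and names are invented for this example). Operations whose operands are already known constants are evaluated at compile time and the results are propagated forward:

    // Hypothetical middle-end sketch: constant folding over a toy three-address IR.
    #include <cctype>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Instr {                        // dst = lhs op rhs
      std::string dst, lhs, rhs;
      char op;
    };

    static bool isNumber(const std::string& s) {
      return !s.empty() && std::isdigit(static_cast<unsigned char>(s[0]));
    }

    // Fold "dst = c1 op c2" into "dst = c" and propagate known constants forward.
    void constantFold(std::vector<Instr>& code) {
      std::map<std::string, long> known;  // values proven constant so far
      for (auto& in : code) {
        auto resolve = [&](std::string& operand) {
          auto it = known.find(operand);
          if (it != known.end()) operand = std::to_string(it->second);
        };
        resolve(in.lhs);
        resolve(in.rhs);
        if (isNumber(in.lhs) && isNumber(in.rhs)) {
          long a = std::stol(in.lhs), b = std::stol(in.rhs), r = 0;
          switch (in.op) {                // only +, -, * appear in this toy IR
            case '+': r = a + b; break;
            case '-': r = a - b; break;
            case '*': r = a * b; break;
          }
          known[in.dst] = r;
          in = {in.dst, std::to_string(r), "", '='};   // rewrite as a plain constant
        }
      }
    }

    int main() {
      std::vector<Instr> code = {
          {"t0", "2", "3", '*'},          // t0 = 2 * 3   ->  t0 = 6
          {"t1", "t0", "4", '+'},         // t1 = t0 + 4  ->  t1 = 10
          {"t2", "x", "t1", '+'},         // t2 = x + t1  ->  t2 = x + 10
      };
      constantFold(code);
      for (const auto& in : code) {
        std::cout << in.dst << " = " << in.lhs;
        if (in.op != '=') std::cout << " " << in.op << " " << in.rhs;
        std::cout << "\n";
      }
    }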

Back-end Compilers

The back-end compiler is responsible for processor-specific optimization and for code generation for a particular architecture. Back-end design is not a trivial process: it consists of several phases that together produce a binary file for the target architecture. When designing a back-end compiler, it is common to distinguish the following steps for the target processor architecture (a simplified code-generation sketch follows this list):

  • Defining the register sets;
  • Describing special-purpose registers;
  • Describing other specialized information about the hardware;
  • Describing the calling convention;
  • Implementing frame lowering;
  • Describing the instruction-emission mechanism;
  • Creating the DAG (Directed Acyclic Graph);
  • Describing the instruction scheduling phase;
  • Generating code for the target architecture and producing the list of machine instructions through the assembler;
  • Creating a binary file for the target architecture;
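
To make the last steps more tangible, here is a hypothetical sketch of naive code generation for the same kind of toy three-address IR used in the middle-end sketch above: every value lives in a stack slot and each operation is lowered to x86-64-style pseudo-assembly. Real back-ends perform DAG-based instruction selection, scheduling, and register allocation instead of this direct mapping:

    // Hypothetical back-end sketch: lower a toy three-address IR to
    // x86-64-style pseudo-assembly with one stack slot per value.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Instr { std::string dst, lhs, rhs; char op; };   // dst = lhs op rhs

    int main() {
      std::vector<Instr> code = {{"t0", "a", "b", '+'}, {"t1", "t0", "c", '*'}};

      std::map<std::string, int> slot;                       // value -> stack offset
      auto offsetOf = [&](const std::string& v) {
        auto it = slot.find(v);
        if (it == slot.end())
          it = slot.emplace(v, static_cast<int>(slot.size() + 1) * 8).first;
        return it->second;
      };
      auto load = [&](const std::string& v, const char* reg) {
        std::cout << "  mov " << reg << ", [rbp-" << offsetOf(v) << "]\n";
      };

      std::cout << "func:\n";
      for (const auto& in : code) {
        load(in.lhs, "rax");                                 // first operand into rax
        load(in.rhs, "rcx");                                 // second operand into rcx
        std::cout << (in.op == '+' ? "  add rax, rcx\n" : "  imul rax, rcx\n");
        std::cout << "  mov [rbp-" << offsetOf(in.dst) << "], rax\n";
      }
      std::cout << "  ret\n";
    }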

In developing back-end compilers, the ShuraCore team uses the LLVM and GCC infrastructure. Our company develops back-end compilers for various processor architectures. We provide services for the adaptation and creation of back-end compilers both for existing processor architectures (RISC-V, ARM, PowerPC, etc.) and for newly developed architectures.

MLIR (Multi-Level Intermediate Representation)

The MLIR (Multi-Level Intermediate Representation) project is a new approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain-specific compilers, and help connect existing compilers. MLIR is designed as a hybrid intermediate representation (IR) able to support many different requirements in a single infrastructure.

The MLIR project defines a standard IR that brings together the infrastructure needed to run high-performance machine learning models in TensorFlow and similar ML environments. This project includes the application of HPC techniques and the integration of search algorithms such as reinforcement learning. MLIR aims to reduce the cost of new hardware implementation and improve usability for existing TensorFlow users.

MLIR brings together the infrastructure for high-performance ML models in TensorFlow. The TensorFlow ecosystem contains several compilers and optimizers that operate at multiple levels of the software and hardware stack, and we expect the gradual adoption of MLIR to simplify every aspect of this stack. ShuraCore uses the MLIR project to design compilers for a wide range of hardware platforms.

The ShuraCore team specializes in the following frameworks and tools:

  • TensorFlow
  • Caffe
  • PyTorch
  • LLVM
  • CUDA
  • OpenCL

HLS (High-Level Synthesis) Compilers

HLS (High-Level Synthesis) compilers are used to create digital devices from high-level languages. The main goal of HLS products is to simplify the FPGA and ASIC design process. The most common task of an HLS compiler is to generate hardware description languages (Verilog or VHDL) from high-level source code (C/C++).

Many modern HLS compilers are implemented using the LLVM framework. High-level synthesis can also be applied to programmable logic controllers (PLCs), in which case the output is the IEC 61131 family of languages.

The HLS compiler generates various hardware microarchitectures following the specified directives and taking into account the tools used. HLS compilers allow you to find a trade-off between execution speed and hardware complexity.
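
As a small illustration of such directives (the pragma spellings below follow the Xilinx Vitis HLS convention; other HLS tools use different directive syntax, and this kernel is an invented example), the loop directives ask the tool to pipeline and partially unroll a multiply-accumulate loop when it generates the RTL:

    // Hypothetical HLS-style C++ kernel. The HLS compiler reads the source plus
    // the directives and emits Verilog/VHDL implementing the requested
    // microarchitecture (here: a pipelined, partially unrolled MAC loop).
    constexpr int N = 128;

    void dot_product(const int a[N], const int b[N], int* out) {
      int acc = 0;
      for (int i = 0; i < N; ++i) {
    #pragma HLS PIPELINE II=1   // aim for one loop iteration per clock cycle
    #pragma HLS UNROLL factor=4 // trade extra hardware for more parallelism
        acc += a[i] * b[i];
      }
      *out = acc;
    }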

Our team is ready to develop an HLS compiler for your tasks. ShuraCore constantly monitors trends in this area and collaborates with the academic community. We apply HLS to processor architectures and develop FPGA solutions and other products using HLS.

Hardware Compilers (Synthesis Tools)

Hardware compilers, or synthesis tools, are compilers whose output is a description of a hardware configuration rather than a sequence of instructions. Typical targets are field-programmable gate arrays (FPGAs) and structured application-specific integrated circuits (ASICs).

These compilers are called hardware compilers because the source code they compile effectively controls the final configuration of the hardware and how its internal components interact.

Tools, technologies, and programming languages that we use when developing hardware compilers:

  • C++
  • STL, Boost
  • LLVM
  • triSYCL
  • DSL & DSeL

We also use open-source solutions to create bitstreams from HDL files.

Virtual Machine

A virtual machine is a software or hardware system that emulates the hardware of a particular platform and executes programs for a target platform on a host platform, or that virtualizes a particular platform and creates environments which isolate programs and even operating systems from each other. The term also refers to the specification of certain computing environments.
Virtual machines can be used for the following purposes (a small emulation sketch follows this list):

  • Protecting information and limiting the capabilities of programs;
  • Research on the performance of the software or new computer architecture;
  • Emulation of processor architectures and other hardware;
  • Optimizing the use of the resources of mainframes and other powerful computers;
  • Running arbitrary code to control a unified system;
  • Modeling information systems with client-server architecture;
  • Cluster management;
  • Testing and debugging system software;
  • Scanning programs for malware content;
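
As a minimal, purely illustrative sketch of the emulation use case (the instruction set below is invented for this example), the fetch-decode-execute loop of a tiny 8-bit accumulator machine looks like this:

    // Hypothetical sketch of hardware emulation inside a virtual machine: a
    // fetch-decode-execute loop for an invented 8-bit accumulator CPU. Real VMs
    // add memory protection, devices, and often dynamic (JIT) translation.
    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    enum Opcode : uint8_t { LOAD_IMM = 0x01, ADD_MEM = 0x02, STORE = 0x03, HALT = 0xFF };

    int main() {
      std::vector<uint8_t> mem(256, 0);                 // guest memory
      // Guest program: acc = 5; acc += mem[0x80]; mem[0x81] = acc; halt.
      const uint8_t program[] = {LOAD_IMM, 5, ADD_MEM, 0x80, STORE, 0x81, HALT};
      std::copy(std::begin(program), std::end(program), mem.begin());
      mem[0x80] = 7;                                    // guest data

      uint8_t acc = 0;                                  // accumulator register
      uint8_t pc = 0;                                   // program counter
      for (;;) {                                        // fetch-decode-execute loop
        switch (mem[pc++]) {
          case LOAD_IMM: acc = mem[pc++]; break;
          case ADD_MEM:  acc += mem[mem[pc++]]; break;
          case STORE:    mem[mem[pc++]] = acc; break;
          case HALT:     std::cout << "mem[0x81] = " << int(mem[0x81]) << "\n"; return 0;
          default:       return 1;                      // illegal instruction
        }
      }
    }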

The ShuraCore team uses the following tools and programming languages when developing virtual machines:

  • C/C++, Rust
  • WASM
  • LLVM
  • VirtualBox, VMware, QEMU
  • open-VM-tools

AST and Bytecode Interpreters

An interpreter is a translator that analyzes, processes, and executes a program's source code (or a query) line by line. An interpreter can step through code without a separate compilation step, which is useful for running it on embedded platforms.

Our company ports interpreters to any hardware or software platform and develops AST and bytecode interpreters.
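
To show the difference between the two styles in miniature, here is a hypothetical sketch of a bytecode interpreter: a stack machine with a switch-based dispatch loop. An AST interpreter, such as the evaluator in the front-end sketch above, walks the tree directly instead of executing a flat instruction stream:

    // Hypothetical bytecode interpreter sketch: a tiny stack machine with a
    // switch-based dispatch loop executing a flat instruction stream.
    #include <iostream>
    #include <vector>

    enum Op : int { PUSH, ADD, MUL, PRINT, HALT };

    void run(const std::vector<int>& program) {
      std::vector<int> stack;
      size_t pc = 0;                                // program counter
      for (;;) {
        switch (program[pc++]) {
          case PUSH: stack.push_back(program[pc++]); break;
          case ADD: { int b = stack.back(); stack.pop_back(); stack.back() += b; break; }
          case MUL: { int b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
          case PRINT: std::cout << stack.back() << "\n"; break;
          case HALT: return;
        }
      }
    }

    int main() {
      // Bytecode for: print(2 + 3 * 4)
      run({PUSH, 3, PUSH, 4, MUL, PUSH, 2, ADD, PRINT, HALT});   // prints 14
    }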

LLVM

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Despite its name, LLVM has little to do with traditional virtual machines; the name "LLVM" itself is not an acronym but the full name of the project. Both LLVM and the GNU Compiler Collection (GCC) are compiler toolchains; the main difference is that LLVM is organized as a set of reusable libraries with well-defined interfaces, while GCC is a more monolithic codebase.
