MLIR - ShuraCore | Compiler Development Services

MLIR

The MLIR (Multi-Level Intermediate Representation) project is a new approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain-specific compilers, and help connect existing compilers together. MLIR is designed as a hybrid intermediate representation (IR) that can support many different requirements in a single unified infrastructure.

The MLIR project defines a common IR that brings together the infrastructure needed to run high-performance machine learning models in TensorFlow and similar ML environments. This project includes the application of HPC techniques along with the integration of search algorithms such as reinforcement learning. MLIR aims to reduce the cost of bringing up new hardware and to improve usability for existing TensorFlow users.
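As an illustration, here is a minimal function in MLIR's textual IR, written with the standard `func` and `arith` dialects (dialect and operation names as in recent MLIR releases):

```mlir
// A function that adds two 32-bit floats, expressed in MLIR's
// textual IR using the `func` and `arith` dialects.
func.func @add(%a: f32, %b: f32) -> f32 {
  %sum = arith.addf %a, %b : f32
  return %sum : f32
}
```

Each operation lives in a dialect (here `func` and `arith`), which is how MLIR hosts multiple levels of abstraction in one IR.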


MLIR brings together the infrastructure for high-performance ML models in TensorFlow. The TensorFlow ecosystem contains several compilers and optimizers that operate at multiple levels of the software and hardware stack, and we expect the gradual adoption of MLIR to simplify every aspect of this stack. ShuraCore uses the MLIR project to design compilers for hardware platforms such as CPUs, GPUs, FPGAs, and TPUs.

The ShuraCore team specializes in the following frameworks and tools:

  • TensorFlow
  • Caffe
  • PyTorch
  • LLVM
  • CUDA
  • OpenCL

Compiler Design Services

ShuraCore specializes in implementing new ports of modern toolchains: GCC, GDB, the GNU libraries, Binutils, LLDB, and the LLVM utilities and libraries. In addition, we optimize and adapt existing compilers for any hardware platform. Finally, the ShuraCore team provides a full range of services for the development of compilers and interpreters.

We also work in the following areas: development of SDKs, virtual machines, obfuscators, and code deobfuscators for our clients. We port debuggers and simulators to new hardware platforms and write high-speed optimizations. Our team also develops compilers for neural and tensor processors, and ShuraCore creates developer tools based on the LLVM framework.

JIT and AOT

JIT (just-in-time) compilers translate code to machine code at run time and are used to improve the performance of interpreted programs, while AOT (ahead-of-time) compilers translate the whole program before it runs.
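The performance gap between the two execution models can be sketched in Python (an illustrative toy, not ShuraCore code): an interpreter re-walks the expression tree on every evaluation, while a "JIT" generates and compiles the code once and then runs native bytecode.

```python
# Minimal sketch of the JIT idea: interpret an arithmetic expression
# by walking its AST on every call, versus compiling it once to a
# callable with Python's built-in compile().
import ast

def interpret(expr, env):
    """Re-walk the expression tree on every evaluation (interpreter)."""
    node = ast.parse(expr, mode="eval").body
    def ev(n):
        if isinstance(n, ast.BinOp):
            l, r = ev(n.left), ev(n.right)
            return l + r if isinstance(n.op, ast.Add) else l * r
        if isinstance(n, ast.Name):
            return env[n.id]
        return n.value  # numeric constant
    return ev(node)

def jit(expr):
    """Compile the expression once; later calls run compiled bytecode."""
    code = compile(f"lambda x: {expr}", "<jit>", "eval")
    return eval(code)

f = jit("x * x + 1")
print(interpret("x * x + 1", {"x": 5}))  # 26
print(f(5))                              # 26
```

Both paths give the same answer; the JIT-compiled `f` simply pays the translation cost once instead of on every call.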

Front-end Compilers

The front-end compiler analyzes the source code (lexical, syntactic, and semantic analysis) and creates an internal representation of the program for the later compilation stages.
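A toy front end can be sketched in a few lines of Python, assuming the internal representation is a simple tuple-based AST (an illustrative sketch only):

```python
# Toy front end: tokenize and parse "1 + 2 * 3" into a tuple AST,
# the internal representation that later stages would consume.
import re

TOKEN = re.compile(r"\d+|[+*]")

def parse(src):
    toks = TOKEN.findall(src)
    pos = 0
    def peek():
        return toks[pos] if pos < len(toks) else None
    def term():  # handles '*' (higher precedence)
        nonlocal pos
        node = int(toks[pos]); pos += 1
        while peek() == "*":
            pos += 1
            rhs = int(toks[pos]); pos += 1
            node = ("*", node, rhs)
        return node
    def expr():  # handles '+' (lower precedence)
        nonlocal pos
        node = term()
        while peek() == "+":
            pos += 1
            node = ("+", node, term())
        return node
    return expr()

print(parse("1 + 2 * 3"))  # ('+', 1, ('*', 2, 3))
```

Note that operator precedence is already encoded in the tree shape, so later stages never need to look at the source text again.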

Back-end Compilers

The back-end compiler is responsible for optimizations specific to the target processor architecture and for generating the final machine code.
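Continuing the toy tuple-AST example, a back end might run an optimization pass and then emit instructions for a hypothetical two-operation stack machine (purely illustrative; the instruction set is invented for this sketch):

```python
# Toy back end: constant-fold the AST, then emit PUSH/ADD/MUL
# instructions for a hypothetical stack machine.

def fold(node):
    """Constant folding: evaluate subtrees whose operands are known."""
    if isinstance(node, tuple):
        op, l, r = node[0], fold(node[1]), fold(node[2])
        if isinstance(l, int) and isinstance(r, int):
            return l + r if op == "+" else l * r
        return (op, l, r)
    return node

def codegen(node, out=None):
    """Emit stack-machine instructions in post-order."""
    if out is None:
        out = []
    if isinstance(node, tuple):
        op, l, r = node
        codegen(l, out)
        codegen(r, out)
        out.append("ADD" if op == "+" else "MUL")
    else:
        out.append(f"PUSH {node}")
    return out

# The fully constant expression folds away to a single instruction:
print(codegen(fold(("+", 1, ("*", 2, 3)))))  # ['PUSH 7']
```

Constant folding is target-independent, while instruction selection like this is exactly the part that must be rewritten per architecture.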

MLIR

The MLIR (Multi-Level Intermediate Representation) project is a new approach to building reusable and extensible compiler infrastructure.

Hardware Compilers

Hardware compilers, or synthesis tools, are compilers whose output is a description of a digital circuit, typically in a hardware description language.

HLS Compilers

HLS (High-Level Synthesis) compilers are used to create digital devices from descriptions in high-level languages such as C or C++.

LLVM

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
