Modular has made the core modules of the standard library for the Mojo programming language, which is based on Python, available under an Apache 2.0 license.
Mojo is emerging as an alternative programming language that gives application developers additional capabilities for building higher-performing applications.
Applications written in Mojo can run faster than applications built using the C programming language, Modular CEO Chris Lattner said, making Mojo a better option for building, for example, artificial intelligence (AI) applications.
The overall goal is to build a developer ecosystem around Mojo by opening up revision history for the standard library, releasing nightly builds of the Mojo compiler, providing public continuous integration (CI) tools and allowing external contributions through GitHub pull requests, Lattner added.
Python is popular because it is easy to learn. However, applications written in Python don’t run nearly as fast as applications written in C or Java. Modular created Mojo to give Python developers a language with the same familiar syntax that can be used to build high-performance applications.
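To illustrate, here is a minimal sketch of what that looks like in practice. The function names are hypothetical, but the syntax follows Mojo’s documented conventions, in which `fn` declares a statically typed function the compiler can optimize while the code still reads like Python:

```mojo
# Hypothetical example: Python-style syntax with optional static typing.
# `fn` declares a strictly typed function that the Mojo compiler can
# optimize aggressively, while the call site reads like ordinary Python.
fn add(a: Int, b: Int) -> Int:
    return a + b

fn main():
    print(add(2, 3))  # prints 5
```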
That approach provides the added benefit of offering an alternative to building AI applications using the proprietary Compute Unified Device Architecture (CUDA) model advanced by NVIDIA, which is optimized for a narrow range of processors, said Lattner. The development of AI applications should be based on a programming language that is advanced by a broad ecosystem of developers, in much the same way other programming languages are maintained, he added.
Lattner, who previously spearheaded the development of the Swift programming language for iOS platforms, said that as processors beyond existing graphics processing units (GPUs) and general-purpose CPUs are developed specifically to run AI workloads, a programming language that is not tightly tied to a specific hardware platform will prove crucial.
It’s not clear when processors optimized for AI workloads will become widely available, but a concerted effort is underway to reduce NVIDIA’s current dominance over AI. GPUs are widely used to train AI models because they provide advanced parallel processing capabilities, but they were originally developed for graphics workloads. Providers of processors optimized for AI are making the case that silicon designed specifically for AI workloads will be more efficient.
Of course, regardless of which processors prevail, the concern is that NVIDIA would maintain its dominance if AI applications continue to be built predominantly using the CUDA framework, said Lattner.
It’s still early days as far as the development of AI applications is concerned, but many data science teams have already shown a clear preference for Python. In addition, developers need to build high-performance Python applications for use cases beyond AI, noted Lattner.
As more AI models are built, the next major challenge will be integrating them into DevOps workflows alongside all the other software artifacts that organizations build and deploy. The one thing that is certain is that DevOps teams will be asked to deploy and maintain an even more diverse range of software artifacts.
Image source: Shahadat Rahman on Unsplash