
Google Unveils TensorFlow 2.18 with Optimized Workflows for AI Model Development

TensorFlow 2.18 introduces new features that promise to optimize machine learning workflows, focusing on enhanced compatibility and improved computational efficiency.


Google has released TensorFlow 2.18, bringing significant enhancements including full support for NumPy 2.0 and a revised approach to CUDA through Hermetic builds.

TensorFlow is an open-source machine learning framework developed by Google, designed for building and deploying machine learning models, particularly deep learning neural networks. It provides a comprehensive ecosystem of tools, libraries, and community resources that allow researchers and developers to easily create and train ML models, and then deploy them in various environments, from servers to mobile devices.

The updates in TensorFlow 2.18 are aimed at improving developer workflows and computational efficiency in machine learning applications.

Support for NumPy 2.0

TensorFlow 2.18 integrates NumPy 2.0 as the default version, a shift that affects how developers interact with arrays and matrices. While most APIs should continue to function correctly, some code may hit issues with out-of-bound conversions and NumPy scalar representations due to the changes in type promotion rules outlined in NEP 50.
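The two behavioral changes can be seen in plain NumPy, no TensorFlow required. A minimal sketch (the specific values are illustrative):

```python
import numpy as np

# Out-of-bound conversions: NumPy 1.x silently wrapped a Python int that
# did not fit the target dtype; NumPy 2.0 raises OverflowError instead.
try:
    np.uint8(300)
    print("np.uint8(300) wrapped silently (NumPy 1.x behaviour)")
except OverflowError:
    print("np.uint8(300) raised OverflowError (NumPy 2.0 behaviour)")

# NEP 50 type promotion: Python scalars remain "weak" and adopt the array
# dtype, but NumPy scalars no longer demote to the array's dtype.
a = np.array([1.0], dtype=np.float32)
print((a + 3.0).dtype)              # float32 under both 1.x and 2.0
print((a + np.float64(3.0)).dtype)  # float32 on 1.x, float64 under NEP 50
```

Code that relied on value-based promotion keeping small results in a narrow dtype is the kind most likely to need auditing.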

To ease the transition, TensorFlow retains specific behaviors from the previous NumPy 1.x series, ensuring continuity for existing projects. Support for NumPy 1.26 is set to continue until 2025, offering developers time to adapt their codebases to the new standards.

Introducing Hermetic CUDA

In this release, TensorFlow has implemented Hermetic CUDA, eliminating the reliance on local CUDA installations. Essential libraries such as cuDNN and NCCL are downloaded during the build process instead, enhancing reproducibility for machine learning projects.

Hermetic CUDA is a specific version of the CUDA toolkit that is downloaded and used within a project, independent of the user’s local CUDA installation. This approach, often used in large-scale machine learning projects, ensures more reproducible builds and consistent behavior across different environments.
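For from-source builds, this is driven by repository environment variables rather than whatever CUDA happens to be on the machine. A sketch, with flag names taken from TensorFlow's hermetic CUDA build notes and version numbers that are purely illustrative:

```shell
# Illustrative bazel invocation: HERMETIC_CUDA_VERSION and
# HERMETIC_CUDNN_VERSION tell the build which toolkit versions to
# download, independent of any locally installed CUDA.
bazel build --config=cuda \
  --repo_env=HERMETIC_CUDA_VERSION=12.3.1 \
  --repo_env=HERMETIC_CUDNN_VERSION=9.1.1 \
  //tensorflow/tools/pip_package:wheel
```

Because the toolkit versions are pinned in the build configuration, two machines running this command should produce the same artifacts.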

The update features dedicated CUDA kernels optimized for GPUs with a compute capability of 8.9, significantly improving performance on NVIDIA hardware such as the RTX 40 series. However, support for GPUs older than the Pascal generation (compute capability 6.0) has been dropped; users of legacy hardware should stay on TensorFlow 2.16 or compile from source.
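To check where a given machine falls, the compute capability of each visible GPU can be queried at runtime. A quick sketch, assuming TensorFlow is installed (on CPU-only machines the GPU list is simply empty):

```python
import tensorflow as tf

# List visible GPUs and report each device's compute capability.
# Prebuilt TensorFlow 2.18 wheels require >= 6.0 (Pascal); the new tuned
# kernels target 8.9 hardware such as the RTX 40 series.
for gpu in tf.config.list_physical_devices("GPU"):
    details = tf.config.experimental.get_device_details(gpu)
    cc = details.get("compute_capability")  # e.g. (8, 9) on an RTX 4090
    print(gpu.name, cc)
```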

Changes to TensorFlow Lite’s Development

TensorFlow Lite is transitioning to the LiteRT repository, a strategic shift in how the lightweight framework will be developed. With TFLite binary releases discontinued, developers are encouraged to follow LiteRT for future updates.
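In practice, the main change for existing TFLite Python code is the import path. An illustrative sketch, with package and module names based on the `ai-edge-litert` distribution (treat them as assumptions to verify against the LiteRT documentation):

```python
# Migration sketch: prefer the LiteRT interpreter when available, fall
# back to the legacy tf.lite path otherwise. Only the import changes;
# the Interpreter API itself remains the familiar TFLite one.
try:
    from ai_edge_litert.interpreter import Interpreter  # new LiteRT package
except ImportError:
    try:
        from tensorflow.lite import Interpreter  # legacy TFLite import
    except ImportError:
        Interpreter = None  # neither runtime installed in this environment

print("Interpreter available:", Interpreter is not None)

# Typical usage once a runtime is present ("model.tflite" is a placeholder):
# interpreter = Interpreter(model_path="model.tflite")
# interpreter.allocate_tensors()
```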

This release arrives amid a highly competitive environment in the machine learning domain. In September 2021, Microsoft launched TensorFlow-DirectML, an open-source project that combines Google’s TensorFlow with Microsoft’s DirectML API, providing enhanced GPU acceleration on Windows systems. This initiative allows developers to harness GPU power from vendors such as AMD and NVIDIA.

In September 2024, AMD’s Jack Huynh announced UDNA, a new GPU architecture that merges the RDNA and CDNA designs to improve compatibility across applications. The unified architecture seeks to challenge NVIDIA’s CUDA dominance in AI and high-performance computing.

Last Updated on November 7, 2024 2:15 pm CET

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
