Summary
We are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 2.0 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.
Along with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, covering both the in-tree libraries and separately packaged libraries such as TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. Please find the list of the latest stable versions and updates below.
Latest Stable Library Versions (Full List)
- TorchArrow 0.1.0
- TorchAudio 2.0
- TorchData 0.6.0
- TorchRec 0.4.0
- TorchServe 0.7.1
- TorchText 0.15.0
- TorchVision 0.15
- TorchX 0.4.0
- PyTorch on XLA Devices 1.14
*To see prior versions or (unstable) nightlies, click on versions in the top left menu above ‘Search Docs’.
TorchAudio
[Beta] Data augmentation operators
The release adds several data augmentation operators under torchaudio.functional and torchaudio.transforms:
- torchaudio.functional.add_noise
- torchaudio.functional.convolve
- torchaudio.functional.deemphasis
- torchaudio.functional.fftconvolve
- torchaudio.functional.preemphasis
- torchaudio.functional.speed
- torchaudio.transforms.AddNoise
- torchaudio.transforms.Convolve
- torchaudio.transforms.Deemphasis
- torchaudio.transforms.FFTConvolve
- torchaudio.transforms.Preemphasis
- torchaudio.transforms.Speed
- torchaudio.transforms.SpeedPerturbation
The operators can be used to synthetically diversify training data to improve the generalizability of downstream models.
For usage details, please refer to the functional and transform documentation and Audio Data Augmentation tutorial.
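As a quick illustration, here is a minimal sketch that chains a few of these operators; the dummy waveform and noise tensors, the SNR, and the speed factor are placeholder values.

```python
import torch
import torchaudio.functional as F
from torchaudio.transforms import Speed

sample_rate = 16000
waveform = torch.randn(1, sample_rate)   # 1 second of placeholder audio
noise = torch.randn(1, sample_rate)      # placeholder noise of the same length

# Mix noise into the waveform at a 10 dB signal-to-noise ratio.
noisy = F.add_noise(waveform, noise, snr=torch.tensor([10.0]))

# Apply pre-emphasis, then perturb the playback speed by a factor of 1.1.
emphasized = F.preemphasis(noisy, coeff=0.97)
speed = Speed(orig_freq=sample_rate, factor=1.1)
perturbed, _ = speed(emphasized)
```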
[Beta] WavLM and XLS-R models
The release adds two families of self-supervised learning models for speech and audio: WavLM and XLS-R.
Besides the model architectures, torchaudio also supports corresponding pre-trained pipelines:
- torchaudio.pipelines.WAVLM_BASE
- torchaudio.pipelines.WAVLM_BASE_PLUS
- torchaudio.pipelines.WAVLM_LARGE
- torchaudio.pipelines.WAV2VEC_XLSR_300M
- torchaudio.pipelines.WAV2VEC_XLSR_1B
- torchaudio.pipelines.WAV2VEC_XLSR_2B
For usage details, please refer to the factory function and pre-trained pipelines documentation.
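As a quick illustration, the sketch below loads the WavLM Base bundle and extracts features; the audio file path is a placeholder.

```python
import torch
import torchaudio

# Instantiate the pre-trained WavLM Base model from its pipeline bundle.
bundle = torchaudio.pipelines.WAVLM_BASE
model = bundle.get_model()

# "speech.wav" is a placeholder; resample to the bundle's expected rate if needed.
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

with torch.inference_mode():
    features, _ = model.extract_features(waveform)  # list of per-layer feature tensors
```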
TorchRL
The initial release of torchrl includes several features that span the entire RL domain. TorchRL can already be used in online, offline, multi-agent, multi-task and distributed RL settings, among others. See below:
[Beta] Environment wrappers and transforms
torchrl.envs includes several wrappers around common environment libraries. This allows users to swap one library for another with little effort. These wrappers build an interface between those simulators and torchrl:
- dm_control
- Gym
- Brax
- EnvPool
- Jumanji
- Habitat
It also comes with many commonly used transforms and vectorized environment utilities that allow for fast execution across simulation libraries. Please refer to the documentation for more detail.
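For illustration, here is a minimal sketch using the Gym wrapper together with a transform; "Pendulum-v1" and the normalization constants are placeholders, and gym must be installed.

```python
from torchrl.envs import TransformedEnv, ObservationNorm
from torchrl.envs.libs.gym import GymEnv

# Wrap a Gym environment; other backends (Brax, dm_control, ...) expose the same interface.
env = GymEnv("Pendulum-v1")

# Transforms can be composed on top of any wrapped environment.
env = TransformedEnv(env, ObservationNorm(in_keys=["observation"], loc=0.0, scale=1.0))

tensordict = env.reset()             # a TensorDict holding the initial observation
rollout = env.rollout(max_steps=10)  # a TensorDict with batch_size [10]
```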
[Beta] Data collectors
Data collection in RL is made easy via single-process or multiprocessed/distributed data collectors that execute the policy in the environment over a desired duration and deliver samples according to the user’s needs. These can be found in torchrl.collectors and are documented here.
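A minimal sketch of a single-process collector is shown below; the environment and frame counts are placeholders, and passing policy=None falls back to a random policy.

```python
from torchrl.collectors import SyncDataCollector
from torchrl.envs.libs.gym import GymEnv

collector = SyncDataCollector(
    create_env_fn=lambda: GymEnv("Pendulum-v1"),
    policy=None,            # defaults to a random policy
    frames_per_batch=64,    # placeholder values
    total_frames=256,
)

for batch in collector:
    # Each batch is a TensorDict of transitions, ready for a replay buffer or a loss module.
    print(batch.batch_size)
```

Multiprocessed and distributed collectors follow the same interface.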
[Beta] Objective modules
Several objective functions are included in torchrl.objectives, among which:
- A generic PPOLoss class and its derived ClipPPOLoss and KLPENPPOLoss
- SACLoss and DiscreteSACLoss
- DDPGLoss
- DQNLoss
- REDQLoss
- A2CLoss
- TD3Loss
- ReinforceLoss
- Dreamer
Vectorized value function operators also appear in the library. Check the documentation here.
[Beta] Models and exploration strategies
We provide multiple models, modules and exploration strategies. A detailed description is available in the documentation.
[Beta] Composable replay buffer
A composable replay buffer class is provided that can be used to store data in multiple contexts, including single- and multi-agent and on- and off-policy settings, among many others. Components include:
- Storages (list, physical or memory-based contiguous storages)
- Samplers (Prioritized, sampler without repetition)
- Writers
- Possibility to add transforms
Replay buffers and other data utilities are documented here.
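For illustration, a minimal sketch of a TensorDict-based buffer backed by a contiguous storage; the keys, shapes and sizes are placeholders.

```python
import torch
from tensordict import TensorDict
from torchrl.data import LazyTensorStorage, TensorDictReplayBuffer

# A replay buffer backed by a lazily allocated, contiguous tensor storage.
buffer = TensorDictReplayBuffer(storage=LazyTensorStorage(max_size=1000))

data = TensorDict(
    {"observation": torch.randn(16, 4), "reward": torch.randn(16, 1)},
    batch_size=[16],
)
buffer.extend(data)        # add a batch of transitions
sample = buffer.sample(8)  # sample 8 transitions as a TensorDict
```

Prioritized samplers and transforms, as listed above, can be plugged into the same buffer.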
[Beta] Logging tools and trainer
We support multiple logging tools including tensorboard, wandb and mlflow.
We provide a generic Trainer class that allows for easy code recycling and checkpointing.
These features are documented here.
TensorDict
TensorDict is a new data carrier for PyTorch.
[Beta] TensorDict: specialized dictionary for PyTorch
TensorDict allows you to execute many common operations across batches of tensors carried by a single container. TensorDict supports many shape and device or storage operations, and can readily be used in distributed settings. Check the documentation to know more.
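For example, a minimal sketch (the keys and shapes are placeholders):

```python
import torch
from tensordict import TensorDict

# A batch of 3 samples stored in a single container.
td = TensorDict(
    {"observation": torch.randn(3, 4), "action": torch.zeros(3, 2)},
    batch_size=[3],
)

reshaped = td.reshape(3, 1)                     # shape ops apply to every entry
on_cpu = td.to("cpu")                           # device ops apply to every entry
first = td[0]                                   # indexing returns a TensorDict
stacked = torch.stack([td, td.clone()], dim=0)  # batch_size becomes [2, 3]
```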
[Beta] @tensorclass: a dataclass for PyTorch
Like TensorDict, tensorclass lets you write dataclasses with built-in torch features such as shape or device operations.
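A minimal sketch, assuming the decorator is importable from the top-level tensordict package (the exact import location may differ across versions); the field names and shapes are placeholders.

```python
import torch
from tensordict import tensorclass

@tensorclass
class ImageBatch:
    images: torch.Tensor
    labels: torch.Tensor

batch = ImageBatch(
    images=torch.randn(16, 3, 32, 32),
    labels=torch.randint(0, 10, (16,)),
    batch_size=[16],
)

half = batch[:8]   # indexing and other shape ops work as on TensorDict
on_cpu = batch.to("cpu")
```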
[Beta] tensordict.nn: specialized modules for TensorDict
The tensordict.nn module provides specialized nn.Module subclasses that make it easy to build arbitrarily complex graphs that can be executed with TensorDict inputs. It is compatible with the latest PyTorch features such as functorch, torch.fx and torch.compile.
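A minimal sketch of wiring two modules together (the keys and layer sizes are placeholders):

```python
import torch
from torch import nn
from tensordict import TensorDict
from tensordict.nn import TensorDictModule, TensorDictSequential

# Wrap plain nn.Modules so they read from and write to TensorDict keys.
encoder = TensorDictModule(nn.Linear(4, 8), in_keys=["observation"], out_keys=["hidden"])
head = TensorDictModule(nn.Linear(8, 2), in_keys=["hidden"], out_keys=["action"])
model = TensorDictSequential(encoder, head)

td = TensorDict({"observation": torch.randn(3, 4)}, batch_size=[3])
model(td)                  # writes "hidden" and "action" into td
print(td["action"].shape)  # torch.Size([3, 2])
```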
TorchRec
[Beta] KeyedJaggedTensor All-to-All Redesign and Input Dist Fusion
We observed performance regression due to a bottleneck in sparse data distribution for models that have multiple, large KJTs to redistribute.
To combat this, we altered the comms pattern so that the initial collective transports only the minimum data (the ‘splits’) required to support the subsequent collective calls for the actual KJT tensor data. Sending the splits in this initial collective means more data is transmitted over the comms stream overall, but the CPU is blocked for significantly shorter amounts of time, leading to better overall QPS.
Furthermore, we altered the TorchRec train pipeline to group the initial collective calls for the splits together before launching the more expensive KJT tensor collective calls. This fusion minimizes the CPU blocked time as launching each subsequent input distribution is no longer dependent on the previous input distribution.
With this feature, variable batch sizes are now natively supported across ranks. These features are documented here.
TorchVision
[Beta] Extending TorchVision’s Transforms to Object Detection, Segmentation & Video tasks
TorchVision is extending its Transforms API! Here is what’s new:
- You can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.
- You can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.
Learn more about these new transforms from our docs, and submit any feedback in our dedicated issue.
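For illustration, a minimal sketch against the beta API as it stood at this release; the torchvision.datapoints names were in beta and may have changed since, and the image and box values are placeholders.

```python
import torch
from torchvision import datapoints
from torchvision.transforms import v2

# Joint transforms: the same random parameters are applied to the image and its boxes.
transforms = v2.Compose([
    v2.RandomHorizontalFlip(p=0.5),
    v2.RandomResizedCrop(size=(224, 224), antialias=True),
])

image = datapoints.Image(torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8))
boxes = datapoints.BoundingBox(
    [[20, 30, 200, 300]], format="XYXY", spatial_size=(480, 640)
)

out_image, out_boxes = transforms(image, boxes)  # boxes stay consistent with the crop/flip
```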
TorchText
[Beta] Adding scriptable T5 and Flan-T5 to the TorchText library with incremental decoding support!
TorchText has added the T5 model architecture with pre-trained weights for both the original T5 paper and Flan-T5. The model is fully torchscriptable and features an optimized multiheaded attention implementation. We include several examples showing how to use the model for summarization, classification, and translation.
For more details, please refer to our docs.
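For illustration, a minimal sketch along the lines of the T5 demo tutorial; the generation helper lived under the prototype namespace at the time of this release, and the prompt text is a placeholder.

```python
from torchtext.models import T5_BASE_GENERATION
from torchtext.prototype.generate import GenerationUtils

bundle = T5_BASE_GENERATION   # pre-trained weights plus the matching text transform
transform = bundle.transform()
model = bundle.get_model()
model.eval()

generator = GenerationUtils(model)

text = ["summarize: PyTorch 2.0 ships alongside beta updates to the domain libraries."]
model_input = transform(text)
# eos_idx=1 matches the T5 vocabulary used in the TorchText tutorial.
output_ids = generator.generate(model_input, eos_idx=1, num_beams=1)
summary = transform.decode(output_ids.tolist())
```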
TorchX
TorchX is moving to community supported mode. More details will be shared at a later time.