torch.Storage¶
torch.Storage is an alias for the storage class that corresponds with the default data type (torch.get_default_dtype()). For instance, if the default data type is torch.float, torch.Storage resolves to torch.FloatStorage.
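For example (a minimal sketch; it assumes the default data type has not been changed):
>>> import torch
>>> torch.get_default_dtype()
torch.float32
>>> torch.FloatStorage(2).dtype    # the class torch.Storage refers to under this default
torch.float32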
The torch.<type>Storage and torch.cuda.<type>Storage classes, like torch.FloatStorage, torch.IntStorage, etc., are never actually instantiated. Calling their constructors creates a torch.TypedStorage with the appropriate torch.dtype and torch.device. The torch.<type>Storage classes have all of the same class methods that torch.TypedStorage has.
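For example (a minimal sketch of what such a constructor returns):
>>> import torch
>>> s = torch.IntStorage(3)        # legacy constructor
>>> isinstance(s, torch.TypedStorage), s.dtype, s.device
(True, torch.int32, device(type='cpu'))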
A torch.TypedStorage is a contiguous, one-dimensional array of elements of a particular torch.dtype. It can be given any torch.dtype, and the internal data will be interpreted appropriately. A torch.TypedStorage contains a torch.UntypedStorage, which holds the data as an untyped array of bytes.
Every strided torch.Tensor contains a torch.TypedStorage, which stores all of the data that the torch.Tensor views.
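For example (a minimal sketch; it assumes a PyTorch version in which Tensor.storage() still returns a torch.TypedStorage):
>>> import torch
>>> t = torch.arange(6, dtype=torch.float32).view(2, 3)
>>> ts = t.storage()                    # the TypedStorage viewed by the tensor
>>> ts.dtype, ts.size()
(torch.float32, 6)
>>> us = ts.untyped()                   # the underlying raw bytes
>>> us.nbytes() == ts.size() * ts.element_size()
True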
Warning
All storage classes except for torch.UntypedStorage will be removed in the future, and torch.UntypedStorage will be used in all cases.
- class torch.TypedStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- cuda(device=None, non_blocking=False, **kwargs)[source]¶
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters
device (int) – The destination GPU id. Defaults to the current device.
non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
- Return type
T
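Example (a minimal sketch; it only exercises the CUDA path when a GPU is available):
>>> import torch
>>> ts = torch.arange(4, dtype=torch.float32).storage()
>>> if torch.cuda.is_available():
...     cuda_ts = ts.cuda()              # copy to the current CUDA device
...     assert cuda_ts.is_cuda and cuda_ts.device.type == 'cuda'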
- property device¶
- classmethod from_file(filename, shared=False, size=0) → Storage[source]¶
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.
size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.
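Example (a minimal sketch; the path is hypothetical, and shared=True creates the file if it does not already exist):
>>> import torch
>>> s = torch.FloatStorage.from_file("/tmp/float_storage.bin", shared=True, size=16)
>>> len(s)
16
>>> s[0] = 1.5          # with shared=True, the write is carried through to the file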
- hpu(device=None, non_blocking=False, **kwargs)[source]¶
Returns a copy of this object in HPU memory.
If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters
device (int) – The destination HPU id. Defaults to the current device.
non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
- Return type
T
- property is_cuda¶
- property is_hpu¶
- is_pinned(device='cuda')[source]¶
Determine whether the CPU TypedStorage is already pinned on device.
- Parameters
device (str or torch.device) – The device to pin memory on. Default: 'cuda'.
- Returns
A boolean variable.
- is_sparse = False¶
- pin_memory(device='cuda')[source]¶
Copies the CPU TypedStorage to pinned memory, if it’s not already pinned.
- Parameters
device (str or torch.device) – The device to pin memory on. Default: 'cuda'.
- Returns
A pinned CPU storage.
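Example (a minimal sketch; it only exercises pinning when a CUDA build with a visible GPU is available):
>>> import torch
>>> ts = torch.arange(4, dtype=torch.float32).storage()
>>> if torch.cuda.is_available():
...     assert not ts.is_pinned()        # ordinary pageable CPU memory
...     pinned = ts.pin_memory()         # a new page-locked CPU storage
...     assert pinned.is_pinned()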
- share_memory_()[source]¶
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.
- Returns
self
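Example (a minimal sketch; it assumes is_shared() is available on storages):
>>> import torch
>>> ts = torch.arange(4, dtype=torch.float32).storage()
>>> ts.share_memory_() is ts     # the data is moved to shared memory; self is returned
True
>>> ts.is_shared()
True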
- type(dtype=None, non_blocking=False)[source]¶
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
- Parameters
dtype (type or string) – The desired type
non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
- Return type
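Example (a minimal sketch of the no-argument form, which only reports the storage type):
>>> import torch
>>> ts = torch.arange(4, dtype=torch.float32).storage()
>>> ts.type()
'torch.FloatStorage'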
- untyped()[source]¶
Returns the internal torch.UntypedStorage.
- class torch.UntypedStorage(*args, **kwargs)[source]¶
- bfloat16()¶
Casts this storage to bfloat16 type
- bool()¶
Casts this storage to bool type
- byte()¶
Casts this storage to byte type
- byteswap(dtype)¶
Swaps bytes in underlying data
- char()¶
Casts this storage to char type
- clone()¶
Returns a copy of this storage
- complex_double()¶
Casts this storage to complex double type
- complex_float()¶
Casts this storage to complex float type
- copy_()¶
- cpu()¶
Returns a CPU copy of this storage if it’s not already on the CPU
- cuda(device=None, non_blocking=False, **kwargs)¶
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters
device (int) – The destination GPU id. Defaults to the current device.
non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
- data_ptr()¶
- double()¶
Casts this storage to double type
- element_size()¶
- fill_()¶
- float()¶
Casts this storage to float type
- static from_buffer()¶
- static from_file(filename, shared=False, size=0) → Storage¶
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.
size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage; for an UntypedStorage, each element is one byte). If shared is True the file will be created if needed.
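Example (a minimal sketch; the path is hypothetical, and for an untyped storage size is a plain byte count):
>>> import torch
>>> u = torch.UntypedStorage.from_file("/tmp/untyped_storage.bin", shared=True, size=64)
>>> u.nbytes()
64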
- half()¶
Casts this storage to half type
- hpu(device=None, non_blocking=False, **kwargs)¶
Returns a copy of this object in HPU memory.
If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters
device (int) – The destination HPU id. Defaults to the current device.
non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
- int()¶
Casts this storage to int type
- property is_cuda¶
- property is_hpu¶
- is_pinned(device='cuda')¶
Determine whether the CPU storage is already pinned on device.
- Parameters
device (str or torch.device) – The device to pin memory on. Default: 'cuda'.
- Returns
A boolean variable.
- long()¶
Casts this storage to long type
- mps()¶
Returns an MPS copy of this storage if it's not already on the MPS device
- nbytes()¶
- new()¶
- pin_memory(device='cuda')¶
Copies the CPU storage to pinned memory, if it’s not already pinned.
- Parameters
device (str or torch.device) – The device to pin memory on. Default: 'cuda'.
- Returns
A pinned CPU storage.
- resize_()¶
- short()¶
Casts this storage to short type
- tolist()¶
Returns a list containing the elements of this storage
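Example (a minimal sketch of an untyped storage as a raw byte array):
>>> import torch
>>> u = torch.UntypedStorage(4)          # 4 uninitialized bytes on the CPU
>>> u.element_size(), u.nbytes()
(1, 4)
>>> _ = u.fill_(0)                       # set every byte to zero, in place
>>> u.tolist()
[0, 0, 0, 0]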
- type(dtype=None, non_blocking=False, **kwargs)¶
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
- Parameters
dtype (type or string) – The desired type
non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
- untyped()¶
- class torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶
- class torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]¶