
# allocate

Allocates an extension into an uninitialized asset (buffer) account. This is useful when the extension data is greater than the transaction size (~1232 bytes), which requires the data to be sent over multiple transactions.

The allocate instruction should be used once for each extension to be added to an asset. Alternatively, it is possible to specify a list of extensions on the create instruction — this is the preferred method when all extension data fits into a single transaction.

## Accounts

Below is the list of accounts expected by the allocate instruction.

| Name | Writable | Signer | Optional | Description |
| --- | :---: | :---: | :---: | --- |
| `asset` | ✓ | ✓ | | Uninitialized asset account |
| `payer` | ✓ | ✓ | ✓ | Account paying for the storage fees |
| `system program` | | | ✓ | System Program account |

The payer and system program are only required when the account space is not preallocated. In this case, the account will be resized to fit the required extension data.

## Arguments

The allocate instruction expects the information of the extension to be added to the asset.

| Field | Offset | Size | Description |
| --- | --- | --- | --- |
| `extension_type` | 0 | 1 | The type of the extension (a value from the `ExtensionType` enum). |
| `length` | 1 | 4 | The total length of the extension as a `u32` value. |
| `data` | 5 | ~ | (optional) Extension data bytes or a slice of it. |

The `length` represents the total length of the extension, even when it exceeds the transaction size limit. The `data` is optional: it can be the complete extension data, if it fits in a single transaction, or an initial slice of the data if the extension is larger.
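The argument layout above can be sketched as a small serialization helper. This is illustrative only (not part of the `@nifty-oss/asset` API), and it assumes the little-endian integer encoding conventionally used by Solana programs:

```typescript
// Hypothetical helper: serialize allocate arguments per the layout above.
// extension_type: u8 at offset 0, length: u32 at offset 1, data from offset 5.
function serializeAllocateArgs(
  extensionType: number,
  totalLength: number,
  data: Uint8Array,
): Uint8Array {
  const buffer = new Uint8Array(5 + data.length);
  const view = new DataView(buffer.buffer);
  view.setUint8(0, extensionType); // extension type (1 byte)
  view.setUint32(1, totalLength, true); // total extension length (u32, little-endian)
  buffer.set(data, 5); // extension data (or a slice of it)
  return buffer;
}
```

Note that `totalLength` is the full extension size, which may be larger than `data.length` when the data is sent across multiple transactions.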

When the extension data needs multiple transactions to be written, the allocate instruction is followed by one or more write instructions.
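The chunking implied by this flow can be sketched as below. The chunk-size constants are rough illustrative assumptions (the exact budget depends on the accounts and headers in each transaction), and the helper is not part of the `@nifty-oss/asset` API:

```typescript
// Solana packet size limit mentioned above.
const TRANSACTION_SIZE = 1232;
// Rough allowance for signatures, account keys, and instruction
// headers — an assumption for illustration, not a measured value.
const INSTRUCTION_OVERHEAD = 300;
const CHUNK_SIZE = TRANSACTION_SIZE - INSTRUCTION_OVERHEAD;

// Hypothetical helper: split extension data into transaction-sized slices.
// The first slice would go in the allocate instruction, the rest in writes.
function chunkExtensionData(
  data: Uint8Array,
  chunkSize: number = CHUNK_SIZE,
): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    chunks.push(data.subarray(offset, Math.min(offset + chunkSize, data.length)));
  }
  return chunks;
}
```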

## Examples

```typescript
import { allocate, attributes } from '@nifty-oss/asset';

// Accounts:
// - asset: KeypairSigner
// - payer: KeypairSigner
await allocate(umi, {
  asset,
  payer,
  extension: attributes([{ name: 'head', value: 'hat' }]),
}).sendAndConfirm(umi);
```
**Tip**

For on-chain use, the Nifty Asset crate offers macros that facilitate the manipulation of the extension data to avoid issues with stack/heap size: