This is the full developer documentation for Algorand Developer Portal
# Algorand Developer Portal
> Everything you need to build solutions powered by the Algorand blockchain network.
Start your journey today
## Become an Algorand Developer
Follow our quick start guide to install Algorand’s developer toolkit and go from zero to deploying your "Hello, world" smart contract in minutes, using either the TypeScript or Python pathway.
[Install AlgoKit](getting-started/algokit-quick-start)
### [AlgoKit code tutorials](https://tutorials.dev.algorand.co)
[A step-by-step introduction to Algorand using AlgoKit Utils for TypeScript.](https://tutorials.dev.algorand.co)
### [Example gallery](https://examples.dev.algorand.co)
[Explore and launch batteries-included example apps.](https://examples.dev.algorand.co)
### [Connect in Discord](https://discord.gg/algorand)
[Meet other devs and get code support from the community.](https://discord.gg/algorand)
### [Contact the Foundation](https://algorand.co/algorand-foundation/contact)
[Reach out to the team directly with technical inquiries.](https://algorand.co/algorand-foundation/contact)
Join the network
## Run an Algorand node
[Install your node](/nodes/overview/)
Join the Algorand network with a validator node using accessible commodity hardware in a matter of minutes. Experience how easy it is to become a node-runner so you can participate in staking rewards, validate blocks, submit transactions, and read chain data.
# Intro to AlgoKit
AlgoKit is a comprehensive software development kit designed to streamline and accelerate the process of building decentralized applications on the Algorand blockchain. At its core, AlgoKit features a powerful command-line interface (CLI) tool that provides developers with an array of functionalities to simplify blockchain development. Along with the CLI, AlgoKit offers a suite of libraries, templates, and tools that facilitate rapid prototyping and deployment of secure, scalable, and efficient applications. Whether you’re a seasoned blockchain developer or new to the ecosystem, AlgoKit offers everything you need to harness the full potential of Algorand’s impressive tech and innovative consensus algorithm.
[Introduction to AlgoKit](https://www.youtube.com/embed/pojEI-8h0lg?rel=0)
## AlgoKit CLI
[Section titled “AlgoKit CLI”](#algokit-cli)
AlgoKit CLI is a powerful set of command line tools for Algorand developers. Its goal is to help developers build and launch secure, automated, production-ready applications rapidly.
### AlgoKit CLI commands
[Section titled “AlgoKit CLI commands”](#algokit-cli-commands)
Here is the list of commands that you can use with AlgoKit CLI.
* [Bootstrap](/docs/algokit-cli/python/latest/features/project/bootstrap) - Bootstrap AlgoKit project dependencies
* [Compile](/docs/algokit-cli/python/latest/features/compile) - Compile Algorand Python code
* [Completions](/docs/algokit-cli/python/latest/features/completions) - Install shell completions for AlgoKit
* [Deploy](/docs/algokit-cli/python/latest/features/project/deploy) - Deploy your smart contracts effortlessly to various networks
* [Dispenser](/docs/algokit-cli/python/latest/features/dispenser) - Fund your TestNet account with ALGOs from the AlgoKit TestNet Dispenser
* [Doctor](/docs/algokit-cli/python/latest/features/doctor) - Check AlgoKit installation and dependencies
* [Explore](/docs/algokit-cli/python/latest/features/explore) - Explore Algorand blockchains using Lora
* [Generate](/docs/algokit-cli/python/latest/features/generate) - Generate code for an Algorand project
* [Goal](/docs/algokit-cli/python/latest/features/goal) - Run the Algorand goal CLI against the AlgoKit Sandbox
* [Init](/docs/algokit-cli/python/latest/features/init) - Quickly initialize new projects using official Algorand Templates or community provided templates
* [LocalNet](/docs/algokit-cli/python/latest/features/localnet) - Manage a locally sandboxed private Algorand network
* [Project](/docs/algokit-cli/python/latest/features/project) - Perform a variety of AlgoKit project workspace related operations like bootstrapping development environment, deploying smart contracts, running custom commands, and more
* [Task](/docs/algokit-cli/python/latest/features/tasks) - Perform a variety of useful operations like signing & sending transactions, minting ASAs, creating vanity address, and more, on the Algorand blockchain
To learn more about AlgoKit CLI, refer to the following resources:
[AlgoKit CLI Documentation ](/docs/algokit-cli/python/latest/)Learn more about using and configuring AlgoKit CLI
[AlgoKit CLI Repo ](https://github.com/algorandfoundation/algokit-cli)Explore the codebase and contribute to its development
## Algorand Python
[Section titled “Algorand Python”](#algorand-python)
If you are a Python developer, you no longer need to learn a complex smart contract language to write smart contracts.
Algorand Python is a semantically and syntactically compatible, typed Python language that works with standard Python tooling and allows you to write Algorand smart contracts (apps) and logic signatures in Python. Since the code runs on the Algorand Virtual Machine (AVM), there are limitations and minor differences in behavior from standard Python, but all code you write with Algorand Python is Python code.
Here is an example of a simple Hello World smart contract written in Algorand Python:
```py
from algopy import ARC4Contract, String, arc4


class HelloWorld(ARC4Contract):
    @arc4.abimethod()
    def hello(self, name: String) -> String:
        return "Hello, " + name + "!"
```
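Setting aside the `algopy` types, the method body is ordinary string concatenation. A plain-Python sketch of the same logic (illustration only; on chain the `String` AVM type is used instead of `str`):

```python
# Plain-Python equivalent of the contract's hello method
def hello(name: str) -> str:
    return "Hello, " + name + "!"

print(hello("world"))  # -> Hello, world!
```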
To learn more about Algorand Python, refer to the following resources:
[Algorand Python Documentation ](/concepts/smart-contracts/languages/python/)Learn more about the design and implementation of Algorand Python
[Algorand Python Repo ](https://github.com/algorandfoundation/puya)Explore the codebase and contribute to its development
## Algorand TypeScript
[Section titled “Algorand TypeScript”](#algorand-typescript)
If you are a TypeScript developer, you no longer need to learn a complex smart contract language to write smart contracts.
Algorand TypeScript is a semantically and syntactically compatible, typed TypeScript language that works with standard TypeScript tooling and allows you to write Algorand smart contracts (apps) and logic signatures in TypeScript. Since the code runs on the Algorand Virtual Machine (AVM), there are limitations and minor differences in behavior from standard TypeScript, but all code you write with Algorand TypeScript is TypeScript code.
Here is an example of a simple Hello World smart contract written in Algorand TypeScript:
```ts
import { Contract } from '@algorandfoundation/algorand-typescript';

export class HelloWorld extends Contract {
  hello(name: string): string {
    return `Hello, ${name}`;
  }
}
```
To learn more about Algorand TypeScript, refer to the following resources:
[Algorand TypeScript Documentation ](/concepts/smart-contracts/languages/typescript/)Learn more about the design and implementation of Algorand TypeScript
[Algorand TypeScript Repo ](https://github.com/algorandfoundation/puya-ts)Explore the codebase and contribute to its development
## AlgoKit Utils
[Section titled “AlgoKit Utils”](#algokit-utils)
AlgoKit Utils is a utility library recommended for all chain interactions, such as sending transactions, creating tokens (ASAs), calling smart contracts, and reading blockchain records. The goal of this library is to provide intuitive, productive utility functions that make it easier, quicker, and safer to build applications on Algorand. Largely, these functions wrap the underlying Algorand SDK but provide a higher-level interface with sensible defaults and capabilities for common tasks.
AlgoKit Utils is available in TypeScript and Python.
### Capabilities
[Section titled “Capabilities”](#capabilities)
The library helps you interact with and develop against the Algorand blockchain with a series of end-to-end capabilities as described below:
* [**AlgorandClient**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/algorand-client.md) - The key entrypoint to the AlgoKit Utils functionality
* Core capabilities
* [**Client management**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/client.md) - Creation of (auto-retry) algod, indexer and kmd clients against various networks resolved from environment or specified configuration
* [**Account management**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/account.md) - Creation and use of accounts including mnemonic, rekeyed, multisig, transaction signer ([useWallet](https://github.com/TxnLab/use-wallet) for dApps and Atomic Transaction Composer compatible signers), idempotent KMD accounts and environment variable injected
* [**Algo amount handling**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/amount.md) - Reliable and terse specification of microAlgo and Algo amounts and conversion between them
* [**Transaction management**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/transaction.md) - Ability to send single, grouped or Atomic Transaction Composer transactions with consistent and highly configurable semantics, including configurable control of transaction notes (including ARC-0002), logging, fees, multiple sender account types, and sending behavior
* Higher-order use cases
* [**App management**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app.md) - Creation, updating, deleting, calling (ABI and otherwise) smart contract apps and the metadata associated with them (including state and boxes)
* [**App deployment**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-deploy.md) - Idempotent (safely retryable) deployment of an app, including deploy-time immutability and permanence control and TEAL template substitution
* [**ARC-0032 Application Spec client**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-client.md) - Builds on top of the App management and App deployment capabilities to provide a high productivity application client that works with ARC-0032 application spec defined smart contracts (e.g. via Beaker)
* [**Algo transfers**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/transfer.md) - Ability to easily initiate algo transfers between accounts, including dispenser management and idempotent account funding
* [**Automated testing**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/testing.md) - Terse, robust automated testing primitives that work across any testing framework (including jest and vitest) to facilitate fixture management, quickly generating isolated and funded test accounts, transaction logging, indexer wait management and log capture
* [**Indexer lookups / searching**](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/indexer.md) - Type-safe indexer API wrappers (no more `Record` pain), including automatic pagination control
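To make the "Algo amount handling" capability concrete, the underlying idea is that amounts are always stored in one canonical unit (microAlgos, where 1 Algo = 1,000,000 microAlgos) so conversions can never be ambiguous. A minimal plain-Python sketch of that idea (hypothetical names, not the AlgoKit Utils API):

```python
from dataclasses import dataclass

MICRO_ALGOS_PER_ALGO = 1_000_000


@dataclass(frozen=True)
class AlgoAmount:
    """Amount stored canonically in microAlgos to avoid unit ambiguity."""

    micro_algos: int

    @classmethod
    def from_algos(cls, algos: float) -> "AlgoAmount":
        return cls(micro_algos=round(algos * MICRO_ALGOS_PER_ALGO))

    @property
    def algos(self) -> float:
        return self.micro_algos / MICRO_ALGOS_PER_ALGO


fee = AlgoAmount(micro_algos=1_000)   # 1,000 microAlgos is the network's minimum transaction fee
payment = AlgoAmount.from_algos(1.5)  # 1.5 Algos = 1,500,000 microAlgos
print(payment.micro_algos, fee.algos)  # -> 1500000 0.001
```

The real library exposes richer helpers, but the same canonical-unit design is what makes amount specification "reliable and terse".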
To learn more about AlgoKit Utils, refer to the following resources:
[AlgoKit Utils TypeScript Documentation ](/docs/algokit-utils/typescript/latest/)Learn more about the design and implementation of AlgoKit Utils
[AlgoKit Utils TypeScript Repo ](https://github.com/algorandfoundation/algokit-utils-ts#algokit-typescript-utilities)Explore the codebase and contribute to its development
[AlgoKit Utils Python Documentation ](/docs/algokit-utils/python/latest/)Learn more about the design and implementation of AlgoKit Utils
[AlgoKit Utils Python Repo ](https://github.com/algorandfoundation/algokit-utils-py#readme)Explore the codebase and contribute to its development
[Introduction to Algokit Utils](https://www.youtube.com/embed/AkUj1GgcMig?rel=0)
## AlgoKit LocalNet
[Section titled “AlgoKit LocalNet”](#algokit-localnet)
The AlgoKit LocalNet feature allows you to manage (start, stop, reset) a locally sandboxed private Algorand network. This allows you to interact with and deploy changes against your own Algorand network without needing to worry about funding TestNet accounts, whether the information you submit is publicly visible, or whether you have an active Internet connection (once the network has been started).
AlgoKit LocalNet uses Docker images optimized for a great developer experience. This means the Docker images are small and start fast. It also means that features suited to developers are enabled, such as KMD (so you can programmatically get faucet private keys).
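As an illustration, AlgoKit-generated project templates typically point at LocalNet through environment variables like the following. The ports and token shown are the commonly used LocalNet defaults; verify them against your own environment before relying on them:

```shell
# Hypothetical .env fragment for a project targeting AlgoKit LocalNet
ALGOD_SERVER=http://localhost
ALGOD_PORT=4001
ALGOD_TOKEN=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
INDEXER_SERVER=http://localhost
INDEXER_PORT=8980
INDEXER_TOKEN=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
```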
To learn more about AlgoKit LocalNet, refer to the following resources:
[AlgoKit LocalNet Documentation ](/docs/algokit-cli/python/latest/features/localnet)Learn more about using and configuring AlgoKit LocalNet
[AlgoKit LocalNet GitHub Repository ](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/localnet.md)Explore the source code and technical implementation details
## AVM Debugger
[Section titled “AVM Debugger”](#avm-debugger)
The AlgoKit AVM VS Code debugger extension provides a convenient way to debug any Algorand Smart Contracts written in TEAL.
To learn more about the AVM debugger, refer to the following resources:
[AVM Debugger Documentation ](/algokit/avm-debugger)Learn more about using and configuring the AVM Debugger
[AVM Debugger Extension Repo ](https://marketplace.visualstudio.com/items?itemName=AlgorandFoundation.algokit-avm-vscode-debugger)Explore the AVM Debugger codebase and contribute to its development
## Language Servers
[Section titled “Language Servers”](#language-servers)
The Algorand VS Code Language Extensions provide developers with enhanced capabilities to build Algorand smart contracts efficiently within Visual Studio Code. Designed to work alongside the standard Python and TypeScript language servers, these extensions extend core IDE functionality by adding Algorand-specific diagnostics, validation, and intelligent code actions.

The [Python extension](https://marketplace.visualstudio.com/items?itemName=AlgorandFoundation.algorand-python-vscode) integrates seamlessly with the official Python extension and automatically detects the PuyaPy environment to offer real-time contract-aware analysis and quick fixes, helping developers catch errors early and improve code quality. Similarly, the [TypeScript extension](https://marketplace.visualstudio.com/items?itemName=AlgorandFoundation.algorand-typescript-vscode) supports Algorand’s specialized TypeScript and smart contract utilities, providing targeted diagnostics and validation in a familiar developer workflow.

Both extensions simplify the development process by offering immediate feedback relevant to Algorand’s unique blockchain environment, accelerating learning and reducing common mistakes. Currently in beta, they require Visual Studio Code 1.80.0 or later and are designed to complement existing language tooling, making them valuable tools for any developer working on Algorand smart contracts.
[Algokit TypeScript Language Server ](/algokit/language-servers/algorand-typescript)Learn more about the TypeScript Language Server
[Algokit Python Language Server ](/algokit/language-servers/algorand-python)Learn more about the Python Language Server
## Client Generator
[Section titled “Client Generator”](#client-generator)
The client generator generates a type-safe smart contract client for the Algorand Blockchain that wraps the application client in AlgoKit Utils and tailors it to a specific smart contract. It does this by reading an ARC-0032 application spec file and generating a client that exposes methods for each ABI method in the target smart contract, along with helpers to create, update, and delete the application.
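Conceptually, the generator turns each ABI method in the application spec into a typed method on the client, so callers work with plain language types while the client handles the ABI signature. A toy Python sketch of that shape (all names here are hypothetical, not the generator's actual output):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class AbiMethod:
    """Toy model of one ABI method from an application spec."""

    name: str
    arg_types: list[str]
    return_type: str

    @property
    def signature(self) -> str:
        return f"{self.name}({','.join(self.arg_types)}){self.return_type}"


class HelloWorldClient:
    """Sketch of a generated typed client for a contract with one ABI method."""

    _hello = AbiMethod(name="hello", arg_types=["string"], return_type="string")

    def __init__(self, call_app: Callable[[str, list[str]], str]) -> None:
        # call_app stands in for the AlgoKit Utils app-call machinery
        self._call_app = call_app

    def hello(self, name: str) -> str:
        # Callers pass plain Python types; the client supplies the ABI signature
        return self._call_app(self._hello.signature, [name])


# Fake transport that "executes" the contract locally, for illustration only
def fake_call_app(signature: str, args: list[str]) -> str:
    assert signature == "hello(string)string"
    return f"Hello, {args[0]}"


client = HelloWorldClient(fake_call_app)
print(client.hello("world"))  # -> Hello, world
```

The real generated clients additionally expose create/update/delete helpers and integrate with the AlgoKit Utils application client, as described above.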
To learn more about the client generator, refer to the following resources:
[Client Generator TypeScript Documentation ](/algokit/client-generator/typescript)Learn more about the TypeScript client generator for Algorand smart contracts
[Client Generator TypeScript Repo ](https://github.com/algorandfoundation/algokit-client-generator-ts)Explore the TypeScript client generator codebase and contribute to its development
[Client Generator Python Documentation ](/algokit/client-generator/python)Learn more about the Python client generator for Algorand smart contracts
[Client Generator Python Repo ](https://github.com/algorandfoundation/algokit-client-generator-py)Explore the Python client generator codebase and contribute to its development
## Testnet Dispenser
[Section titled “Testnet Dispenser”](#testnet-dispenser)
The AlgoKit TestNet Dispenser API provides functionality for interacting with the Dispenser service, which enables users to fund and refund TestNet assets.
To learn more about the testnet dispenser, refer to the following resources:
[Testnet Dispenser Documentation ](/docs/algokit-utils/typescript/latest/concepts/advanced/dispenser-client)Learn more about using and configuring the AlgoKit TestNet Dispenser
[Testnet Dispenser Repo ](https://github.com/algorandfoundation/algokit/blob/main/docs/testnet_api.md)Explore the technical implementation and contribute to its development
## AlgoKit Tools and Versions
[Section titled “AlgoKit Tools and Versions”](#algokit-tools-and-versions)
While AlgoKit as a *collection* was bumped to Version 3.0 on March 26, 2025, it is important to note that the individual tools in the kit are on different package version numbers. In the future this may be changed to epoch versioning so that it is clear that all packages are part of the same epoch release.
| Tool | Repository | AlgoKit 3.0 Min Version |
| ------------------------------------------ | ------------------------------- | ----------------------- |
| Command Line Interface (CLI) | algokit-cli | 2.6.0 |
| Utils (Python) | algokit-utils-py | 4.0.0 |
| Utils (TypeScript) | algokit-utils-ts | 9.0.0 |
| Client Generator (Python) | algokit-client-generator-py | 2.1.0 |
| Client Generator (TypeScript) | algokit-client-generator-ts | 5.0.0 |
| Subscriber (Python) | algokit-subscriber-py | 1.0.0 |
| Subscriber (TypeScript) | algokit-subscriber-ts | 3.2.0 |
| Puya Compiler | puya | 4.5.3 |
| Puya Compiler, TypeScript | puya-ts | 1.0.0-beta.58 |
| AVM Unit Testing (Python) | algorand-python-testing | 0.5.0 |
| AVM Unit Testing (TypeScript) | algorand-typescript-testing | 1.0.0-beta.30 |
| Lora the Explorer | algokit-lora | 1.2.0 |
| AVM VSCode Debugger | algokit-avm-vscode-debugger | 1.1.5 |
| Utils Add-On for TypeScript Debugging | algokit-utils-ts-debug | 1.0.4 |
| Base Project Template | algokit-base-template | 1.1.0 |
| Python Smart Contract Project Template | algokit-python-template | 1.6.0 |
| TypeScript Smart Contract Project Template | algokit-typescript-template | 0.3.1 |
| React Vite Frontend Project Template | algokit-react-frontend-template | 1.1.1 |
| Fullstack Project Template | algokit-fullstack-template | 2.1.4 |
## Install
[Section titled “Install”](#install)
Note
Refer to [Troubleshooting](#troubleshooting) for more details on mitigation of known edge cases when installing AlgoKit.
### Prerequisites
[Section titled “Prerequisites”](#prerequisites)
The installation prerequisites vary depending on the method you use to install. Please refer to [Installation Methods](#installation-methods).
Depending on the features you choose to leverage from the AlgoKit CLI, additional dependencies may be required. The AlgoKit CLI will tell you if you are missing one for a given command. These optional dependencies are:
* **Git**: Essential for creating and updating projects from templates. Installation guide available at [Git Installation](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
* **Docker**: Necessary for running the AlgoKit LocalNet environment. Docker Compose version 2.5.0 or higher is required. See [Docker Installation](https://docs.docker.com/get-docker/).
* **Python**: For those installing the AlgoKit CLI via `pipx` or building contracts using Algorand Python. **Minimum required version is Python 3.12+ when working with Algorand Python**. See [Python Installation](https://www.python.org/downloads/).
* **Node.js**: For those working on frontend templates or building contracts using Algorand TypeScript or TEALScript. **Minimum required versions are Node.js `v22` and npm `v10`**. See [Node.js Installation](https://nodejs.org/en/download/).
Note
If you have previously installed AlgoKit using `pipx` and would like to switch to a different installation method, please ensure that you first uninstall the existing version by running `pipx uninstall algokit`. Once uninstalled, you can follow the installation instructions for your preferred platform.
### Cross-platform installation
[Section titled “Cross-platform installation”](#cross-platform-installation)
AlgoKit can be installed using OS-specific package managers or the Python tool [pipx](https://pypa.github.io/pipx/). See below for specific installation instructions.
#### Installation Methods
[Section titled “Installation Methods”](#installation-methods)
* [Windows](#install-algokit-on-windows)
* [Mac](#install-algokit-on-mac)
* [Linux](#install-algokit-on-linux)
* [Universal via pipx](#install-algokit-with-pipx-on-any-os)
### Install AlgoKit on Windows
[Section titled “Install AlgoKit on Windows”](#install-algokit-on-windows)
Note
AlgoKit is supported on Windows 10 1709 (build 16299) and later. We only publish an x64 binary; however, it also runs on ARM devices by default using the built-in x64 emulation feature.
1. Ensure prerequisites are installed
* [WinGet](https://learn.microsoft.com/en-us/windows/package-manager/winget/) (should be installed by default on recent Windows 10 or later)
* [Git](https://github.com/git-guides/install-git#install-git-on-windows) (or `winget install git.git`)
* [Docker](https://docs.docker.com/desktop/install/windows-install/) (or `winget install docker.dockerdesktop`)
Note
See [our LocalNet documentation](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/localnet.md#prerequisites) for more tips on installing Docker on Windows
* [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
2. Install using winget
```shell
winget install algokit
```
3. [Verify installation](#verify-installation)
#### Maintenance
[Section titled “Maintenance”](#maintenance)
Some useful commands for updating or removing AlgoKit in the future.
* To update AlgoKit: `winget upgrade algokit`
* To remove AlgoKit: `winget uninstall algokit`
### Install AlgoKit on Mac
[Section titled “Install AlgoKit on Mac”](#install-algokit-on-mac)
Note
AlgoKit is supported on macOS Big Sur (11) and later, for both x64 and ARM (Apple Silicon).
1. Ensure prerequisites are installed
* [Homebrew](https://docs.brew.sh/Installation)
* [Git](https://github.com/git-guides/install-git#install-git-on-mac) (should already be available if `brew` is installed)
* [Docker](https://docs.docker.com/desktop/install/mac-install/) (or `brew install --cask docker`)
Note
Docker requires macOS 11+
2. Install using Homebrew
```shell
brew install algorandfoundation/tap/algokit
```
3. Restart the terminal to ensure AlgoKit is available on the path
4. [Verify installation](#verify-installation)
#### Maintenance
[Section titled “Maintenance”](#maintenance-1)
Some useful commands for updating or removing AlgoKit in the future.
* To update AlgoKit: `brew upgrade algokit`
* To remove AlgoKit: `brew uninstall algokit`
### Install AlgoKit on Linux
[Section titled “Install AlgoKit on Linux”](#install-algokit-on-linux)
Note
AlgoKit is compatible with Ubuntu 16.04 and later, Debian, RedHat, and any distribution that supports [Snap](https://snapcraft.io/docs/installing-snapd), but it is only supported on x64 architecture; ARM is not supported.
1. Ensure prerequisites are installed
* [Snap](https://snapcraft.io/docs/installing-snapd) (should be installed by default on Ubuntu 16.04.4 LTS (Xenial Xerus) or later)
* [Git](https://github.com/git-guides/install-git#install-git-on-linux)
* [Docker](https://docs.docker.com/desktop/install/linux-install/)
2. Install using snap
```shell
sudo snap install algokit --classic
```
> For detailed guidelines for each supported Linux distro, refer to the [Snap Store](https://snapcraft.io/algokit).
3. [Verify installation](#verify-installation)
#### Maintenance
[Section titled “Maintenance”](#maintenance-2)
Some useful commands for updating or removing AlgoKit in the future.
* To update AlgoKit: `snap refresh algokit`
* To remove AlgoKit: `snap remove --purge algokit`
### Install AlgoKit with pipx on any OS
[Section titled “Install AlgoKit with pipx on any OS”](#install-algokit-with-pipx-on-any-os)
1. Ensure desired prerequisites are installed
* [Python 3.10+](https://www.python.org/downloads/)
* [pipx](https://pypa.github.io/pipx/installation/)
* [Git](https://github.com/git-guides/install-git)
* [Docker](https://docs.docker.com/get-docker/)
2. Install using pipx
```shell
pipx install algokit
```
3. Restart the terminal to ensure AlgoKit is available on the path
4. [Verify installation](#verify-installation)
#### Maintenance
[Section titled “Maintenance”](#maintenance-3)
Some useful commands for updating or removing AlgoKit in the future.
* To update AlgoKit: `pipx upgrade algokit`
* To remove AlgoKit: `pipx uninstall algokit`
### Verify installation
[Section titled “Verify installation”](#verify-installation)
Verify AlgoKit is installed correctly by running `algokit --version`; you should see output similar to:
```plaintext
algokit, version 1.0.1
```
Note
If you receive one of the following errors:
* `command not found: algokit` (bash/zsh)
* `The term 'algokit' is not recognized as the name of a cmdlet, function, script file, or operable program.` (PowerShell)
Then ensure that `algokit` is available on the PATH by running `pipx ensurepath` and restarting the terminal.
It is also recommended that you run `algokit doctor` to verify there are no issues in your local environment and to diagnose any problems if you do have difficulties running AlgoKit. The output of this command will look similar to:
```plaintext
timestamp: 2023-03-27T01:23:45+00:00
AlgoKit: 1.0.1
AlgoKit Python: 3.11.1 (main, Dec 23 2022, 09:28:24) [Clang 14.0.0 (clang-1400.0.29.202)] (location: /Users/algokit/.local/pipx/venvs/algokit)
OS: macOS-13.1-arm64-arm-64bit
docker: 20.10.21
docker compose: 2.13.0
git: 2.37.1
python: 3.10.9 (location: /opt/homebrew/bin/python)
python3: 3.10.9 (location: /opt/homebrew/bin/python3)
pipx: 1.1.0
poetry: 1.3.2
node: 18.12.1
npm: 8.19.2
brew: 3.6.18
If you are experiencing a problem with AlgoKit, feel free to submit an issue via:
https://github.com/algorandfoundation/algokit-cli/issues/new
Please include this output, if you want to populate this message in your clipboard, run `algokit doctor -c`
```
As the example above shows, the doctor command output is a helpful artifact to include if you need to ask for support or [raise an issue](https://github.com/algorandfoundation/algokit-cli/issues/new).
### Troubleshooting
[Section titled “Troubleshooting”](#troubleshooting)
This section addresses specific edge cases and issues that some users might encounter when interacting with the CLI. The following table provides solutions to known edge cases:
| Issue Description | OS(s) with observed behaviour | Steps to mitigate | References |
| --------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
| This scenario may arise if the installed `python` was built without the `--with-ssl` flag enabled, causing pip to fail when trying to install dependencies. | Debian 12 | Run `sudo apt-get install -y libssl-dev` to install the required OpenSSL dependency. Afterwards, reinstall Python with the `--with-ssl` flag enabled, either by [building Python from source](https://medium.com/@enahwe/how-to-06bc8a042345) or by using tools like [pyenv](https://github.com/pyenv/pyenv). | [GitHub Issue](https://github.com/actions/setup-python/issues/93) |
| `poetry install` invoked directly or via `algokit project bootstrap all` fails with `Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)`. | `macOS` >= 14 using `python` 3.13 installed via `homebrew` | Install the dependencies deprecated in `3.13` and recent macOS versions via `brew install pkg-config`, delete the virtual environment folder, and retry the `poetry install` command. | N/A |
# AVM Debugger
> Tutorial on how to debug a smart contract using AVM Debugger
The AVM VSCode debugger enables inspection of blockchain logic through `Simulate Traces`: JSON files containing detailed transaction execution data, produced without on-chain deployment. The extension requires both trace files and source maps that link the original code (TEAL or Puya) to the compiled instructions. While the extension works independently, projects created with AlgoKit templates include utilities that automatically generate these debugging artifacts. For a full list of the debugger extension’s capabilities, refer to this [documentation](https://github.com/microsoft/vscode-mock-debug).
This tutorial demonstrates the workflow using a Python-based Algorand project. We will walk through identifying and fixing a bug in an Algorand smart contract using the Algorand Virtual Machine (AVM) Debugger: starting with a simple smart contract containing a deliberate bug, we will use the debugger to pinpoint and fix the issue.
[Debugging with AlgoKit 3.0](https://www.youtube.com/embed/yPWRlmmTSHA?rel=0)
## Prerequisites
[Section titled “Prerequisites”](#prerequisites)
* Visual Studio Code (version 1.80.0 or higher)
* Node.js (version 18.x or higher)
* [algokit-cli](/algokit/algokit-intro) installed
* [Algokit AVM VSCode Debugger](https://github.com/microsoft/vscode-mock-debug) extension installed
* Basic understanding of [Algorand smart contracts using Python](/concepts/smart-contracts/languages/python)
Note
The extension is designed to debug both raw TEAL sourcemaps and sourcemaps generated via Puya compiler on the Algorand Virtual Machine. It provides a step-by-step debugging experience by utilizing transaction execution traces and compiled source maps of your smart contract.
## Step 1: Setup the Debugging Environment
[Section titled “Step 1: Setup the Debugging Environment”](#step-1-setup-the-debugging-environment)
Install the Algokit AVM VSCode Debugger extension from the VSCode Marketplace: open the Extensions view in VSCode, search for "Algokit AVM Debugger", and click Install.
## Step 2: Set Up the Example Smart Contract
[Section titled “Step 2: Set Up the Example Smart Contract”](#step-2-set-up-the-example-smart-contract)
We aim to debug smart contract code in a project generated via `algokit init`; refer to [Algokit](/algokit/algokit-intro) for setup. Here’s the Algorand Python code for a `TicTacToe` smart contract. The bug is in the `move` method, where `games_played` is incremented by `2` for the guest and `1` for the host; it should be incremented by `1` for both.
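The essence of the bug can be illustrated in plain Python, stripped of all contract machinery (a hypothetical simplification, not the contract code itself):

```python
# Simplified model of the per-player games_played counters
def update_games_played_buggy(games_played: dict[str, int], host: str, guest: str) -> None:
    games_played[host] = games_played.get(host, 0) + 1
    games_played[guest] = games_played.get(guest, 0) + 2  # BUG: should be + 1


def update_games_played_fixed(games_played: dict[str, int], host: str, guest: str) -> None:
    games_played[host] = games_played.get(host, 0) + 1
    games_played[guest] = games_played.get(guest, 0) + 1  # both roles gain exactly one game


stats: dict[str, int] = {}
update_games_played_buggy(stats, "alice", "bob")
print(stats)  # -> {'alice': 1, 'bob': 2}  (bob is over-counted)
```

The debugger will let us step to the exact AVM instructions behind the buggy increment in the real contract.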
1. Remove the `hello_world` folder.
2. Create a new tic-tac-toe smart contract starter via `algokit generate smart-contract -a contract_name "TicTacToe"`.
3. Replace the content of `contract.py` with the code below.
* Python
```py
# pyright: reportMissingModuleSource=false
from typing import Literal, Tuple, TypeAlias

from algopy import (
    ARC4Contract,
    BoxMap,
    Global,
    LocalState,
    OnCompleteAction,
    Txn,
    UInt64,
    arc4,
    gtxn,
    itxn,
    op,
    subroutine,
    urange,
)

Board: TypeAlias = arc4.StaticArray[arc4.Byte, Literal[9]]

HOST_MARK = 1
GUEST_MARK = 2


class GameState(arc4.Struct, kw_only=True):
    board: Board
    host: arc4.Address
    guest: arc4.Address
    is_over: arc4.Bool
    turns: arc4.UInt8


class TicTacToe(ARC4Contract):
    def __init__(self) -> None:
        self.id_counter = UInt64(0)
        self.games_played = LocalState(UInt64)
        self.games_won = LocalState(UInt64)
        self.games = BoxMap(UInt64, GameState)

    @subroutine
    def opt_in(self) -> None:
        self.games_played[Txn.sender] = UInt64(0)
        self.games_won[Txn.sender] = UInt64(0)

    @arc4.abimethod(allow_actions=[OnCompleteAction.NoOp, OnCompleteAction.OptIn])
    def new_game(self, mbr: gtxn.PaymentTransaction) -> UInt64:
        if Txn.on_completion == OnCompleteAction.OptIn:
            self.opt_in()
        self.id_counter += 1
        assert mbr.receiver == Global.current_application_address
        pre_new_game_box, exists = op.AcctParamsGet.acct_min_balance(
            Global.current_application_address
        )
        assert exists
        self.games[self.id_counter] = GameState(
            board=arc4.StaticArray[arc4.Byte, Literal[9]].from_bytes(op.bzero(9)),
            host=arc4.Address(Txn.sender),
            guest=arc4.Address(),
            is_over=arc4.Bool(False),  # noqa: FBT003
            turns=arc4.UInt8(),
        )
        post_new_game_box, exists = op.AcctParamsGet.acct_min_balance(
            Global.current_application_address
        )
        assert exists
        assert mbr.amount == (post_new_game_box - pre_new_game_box)
        return self.id_counter

    @arc4.abimethod
    def delete_game(self, game_id: UInt64) -> None:
        game = self.games[game_id].copy()
        assert game.guest == arc4.Address() or game.is_over.native
        assert Txn.sender == self.games[game_id].host.native
        pre_del_box, exists = op.AcctParamsGet.acct_min_balance(
            Global.current_application_address
        )
        assert exists
        del self.games[game_id]
        post_del_box, exists = op.AcctParamsGet.acct_min_balance(
            Global.current_application_address
        )
        assert exists
        itxn.Payment(
            receiver=game.host.native, amount=pre_del_box - post_del_box
        ).submit()

    @arc4.abimethod(allow_actions=[OnCompleteAction.NoOp, OnCompleteAction.OptIn])
    def join(self, game_id: UInt64) -> None:
        if Txn.on_completion == OnCompleteAction.OptIn:
            self.opt_in()
        assert self.games[game_id].host.native != Txn.sender
        assert self.games[game_id].guest == arc4.Address()
        self.games[game_id].guest = arc4.Address(Txn.sender)

    @arc4.abimethod
    def move(self, game_id: UInt64, x: UInt64, y: UInt64) -> None:
        game = self.games[game_id].copy()
        assert not game.is_over.native
        assert game.board[self.coord_to_matrix_index(x, y)] == arc4.Byte()
        assert Txn.sender == game.host.native or Txn.sender == game.guest.native
        is_host = Txn.sender == game.host.native
        if is_host:
            assert game.turns.native % 2 == 0
            self.games[game_id].board[self.coord_to_matrix_index(x, y)] = arc4.Byte(
                HOST_MARK
            )
        else:
            assert game.turns.native % 2 == 1
            self.games[game_id].board[self.coord_to_matrix_index(x, y)] = arc4.Byte(
                GUEST_MARK
            )
        self.games[game_id].turns = arc4.UInt8(
            self.games[game_id].turns.native + UInt64(1)
        )
        is_over, is_draw = self.is_game_over(self.games[game_id].board.copy())
        if is_over:
            self.games[game_id].is_over = arc4.Bool(True)
            self.games_played[game.host.native] += UInt64(1)
            self.games_played[game.guest.native] += UInt64(2)  # incorrect code here
            if not is_draw:
                winner = game.host if is_host else game.guest
                self.games_won[winner.native] += UInt64(1)

    @arc4.baremethod(allow_actions=[OnCompleteAction.CloseOut])
    def close_out(self) -> None:
        pass

    @subroutine
    def coord_to_matrix_index(self, x: UInt64, y: UInt64) -> UInt64:
        return 3 * y + x

    @subroutine
    def is_game_over(self, board: Board) -> Tuple[bool, bool]:
        for i in urange(3):
            # Row check
            if board[3 * i] == board[3 * i + 1] == board[3 * i + 2] != arc4.Byte():
                return True, False
            # Column check
            if board[i] == board[i + 3] == board[i + 6] != arc4.Byte():
                return True, False
        # Diagonal check
        if board[0] == board[4] == board[8] != arc4.Byte():
            return True, False
        if board[2] == board[4] == board[6] != arc4.Byte():
            return True, False
        # Draw check
        if (
            board[0]
            == board[1]
            == board[2]
            == board[3]
            == board[4]
            == board[5]
            == board[6]
            == board[7]
            == board[8]
            != arc4.Byte()
        ):
            return True, True
        return False, False
```
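The board layout and win detection used by the contract can be sketched in plain Python. This is an illustrative sketch only: it uses plain ints and lists rather than `algopy` ARC-4 types, and applies the conventional "board full" draw rule.

```python
# Plain-Python sketch of the contract's board logic (illustrative only;
# the on-chain version operates on algopy/ARC-4 types, not ints and lists).
EMPTY, HOST_MARK, GUEST_MARK = 0, 1, 2

def coord_to_matrix_index(x: int, y: int) -> int:
    # Mirrors the contract's subroutine: row-major index into a 3x3 board
    return 3 * y + x

def is_game_over(board: list[int]) -> tuple[bool, bool]:
    # Returns (is_over, is_draw), mirroring the contract's row/column/diagonal checks
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
             (0, 4, 8), (2, 4, 6)]             # diagonals
    for a, b, c in lines:
        if board[a] == board[b] == board[c] != EMPTY:
            return True, False
    # Conventional draw rule: the board is full with no winner
    if all(cell != EMPTY for cell in board):
        return True, True
    return False, False

board = [EMPTY] * 9
# Guest fills column x=2: cells (2,0), (2,1), (2,2) -> indices 2, 5, 8
for x, y in [(2, 0), (2, 1), (2, 2)]:
    board[coord_to_matrix_index(x, y)] = GUEST_MARK
print(is_game_over(board))  # (True, False)
```

This is the same winning column the deployment script below plays for the guest, which is why the final `assert game_state[3]` (the `is_over` flag) passes.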
Add the deployment code below to the `deploy_config.py` file:
* Python
```py
import base64
import logging

import algokit_utils
import algosdk.abi
from algokit_utils import (
    EnsureBalanceParameters,
    TransactionParameters,
    ensure_funded,
)
from algokit_utils.beta.algorand_client import AlgorandClient
from algokit_utils.beta.client_manager import AlgoSdkClients
from algokit_utils.beta.composer import PayParams
from algosdk.atomic_transaction_composer import TransactionWithSigner
from algosdk.util import algos_to_microalgos
from algosdk.v2client.algod import AlgodClient
from algosdk.v2client.indexer import IndexerClient

logger = logging.getLogger(__name__)


# define deployment behaviour based on supplied app spec
def deploy(
    algod_client: AlgodClient,
    indexer_client: IndexerClient,
    app_spec: algokit_utils.ApplicationSpecification,
    deployer: algokit_utils.Account,
) -> None:
    from smart_contracts.artifacts.tictactoe.tic_tac_toe_client import (
        TicTacToeClient,
    )

    app_client = TicTacToeClient(
        algod_client,
        creator=deployer,
        indexer_client=indexer_client,
    )
    app_client.deploy(
        on_schema_break=algokit_utils.OnSchemaBreak.AppendApp,
        on_update=algokit_utils.OnUpdate.AppendApp,
    )
    last_game_id = app_client.get_global_state().id_counter
    algorand = AlgorandClient.from_clients(AlgoSdkClients(algod_client, indexer_client))
    algorand.set_suggested_params_timeout(0)
    host = algorand.account.random()
    ensure_funded(
        algorand.client.algod,
        EnsureBalanceParameters(
            account_to_fund=host.address,
            min_spending_balance_micro_algos=algos_to_microalgos(200_000),
        ),
    )
    print(f"balance of host address: {algod_client.account_info(host.address)['amount']}")
    print(f"host address: {host.address}")
    ensure_funded(
        algorand.client.algod,
        EnsureBalanceParameters(
            account_to_fund=app_client.app_address,
            min_spending_balance_micro_algos=algos_to_microalgos(10_000),
        ),
    )
    print(f"app_client address: {app_client.app_address}")
    game_id = app_client.opt_in_new_game(
        mbr=TransactionWithSigner(
            txn=algorand.transactions.payment(
                PayParams(
                    sender=host.address,
                    receiver=app_client.app_address,
                    amount=2_500 + 400 * (5 + 8 + 75),
                )
            ),
            signer=host.signer,
        ),
        transaction_parameters=TransactionParameters(
            signer=host.signer,
            sender=host.address,
            boxes=[(0, b"games" + (last_game_id + 1).to_bytes(8, "big"))],
        ),
    )
    guest = algorand.account.random()
    ensure_funded(
        algorand.client.algod,
        EnsureBalanceParameters(
            account_to_fund=guest.address,
            min_spending_balance_micro_algos=algos_to_microalgos(10),
        ),
    )
    app_client.opt_in_join(
        game_id=game_id.return_value,
        transaction_parameters=TransactionParameters(
            signer=guest.signer,
            sender=guest.address,
            boxes=[(0, b"games" + game_id.return_value.to_bytes(8, "big"))],
        ),
    )
    moves = [
        ((0, 0), (2, 2)),
        ((1, 1), (2, 1)),
        ((0, 2), (2, 0)),
    ]
    for host_move, guest_move in moves:
        app_client.move(
            game_id=game_id.return_value,
            x=host_move[0],
            y=host_move[1],
            transaction_parameters=TransactionParameters(
                signer=host.signer,
                sender=host.address,
                boxes=[(0, b"games" + game_id.return_value.to_bytes(8, "big"))],
                accounts=[guest.address],
            ),
        )
        # app_client.join(game_id=game_id.return_value)
        app_client.move(
            game_id=game_id.return_value,
            x=guest_move[0],
            y=guest_move[1],
            transaction_parameters=TransactionParameters(
                signer=guest.signer,
                sender=guest.address,
                boxes=[(0, b"games" + game_id.return_value.to_bytes(8, "big"))],
                accounts=[host.address],
            ),
        )
    game_state = algosdk.abi.TupleType(
        [
            algosdk.abi.ArrayStaticType(algosdk.abi.ByteType(), 9),
            algosdk.abi.AddressType(),
            algosdk.abi.AddressType(),
            algosdk.abi.BoolType(),
            algosdk.abi.UintType(8),
        ]
    ).decode(
        base64.b64decode(
            algorand.client.algod.application_box_by_name(
                app_client.app_id,
                box_name=b"games" + game_id.return_value.to_bytes(8, "big"),
            )["value"]
        )
    )
    assert game_state[3]
```
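The `mbr` payment in `opt_in_new_game` above covers the box minimum balance requirement, which is 2,500 microAlgos per box plus 400 microAlgos per byte of key and value. Here the key is the 5-byte `b"games"` prefix plus an 8-byte game ID, and the `GameState` value is 75 bytes (9-byte board + two 32-byte addresses + 1-byte bool + 1-byte turn counter). A quick sketch of the arithmetic:

```python
# Box MBR formula: 2_500 + 400 * (key_length + value_length) microAlgos
PER_BOX = 2_500
PER_BYTE = 400

key_len = len(b"games") + 8       # 5-byte prefix + 8-byte uint64 game ID = 13
value_len = 9 + 32 + 32 + 1 + 1   # board + host + guest + is_over + turns = 75

mbr = PER_BOX + PER_BYTE * (key_len + value_len)
print(mbr)  # 37700, matching 2_500 + 400 * (5 + 8 + 75) in the deploy script
```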
## Step 3: Compile & Deploy the Smart Contract
[Section titled “Step 3: Compile & Deploy the Smart Contract”](#step-3-compile--deploy-the-smart-contract)
To enable debug mode and full tracing of each execution step, open the `main.py` file and add:
```python
from algokit_utils.config import config
config.configure(debug=True, trace_all=True)
```
For more details, refer to the [Debugger](/docs/algokit-utils/python/latest/concepts/advanced/debugging) documentation.
Next, compile the smart contract using AlgoKit:
```bash
algokit project run build
```
This generates the following files in the artifacts folder: `approval.teal`, `clear.teal`, `clear.puya.map`, `approval.puya.map` and `arc32.json`. The `.puya.map` files are produced by the `puyapy` compiler (which the `algokit project run build` command invokes automatically); the compiler's `--output-source-maps` option is enabled by default.
Deploy the smart contract on localnet:
```bash
algokit project deploy localnet
```
This automatically generates `*.appln.trace.avm.json` files in the `debug_traces` folder, along with `.teal` and `.teal.map` files in the sources folder.
The `.teal.map` files are TEAL source maps, generated automatically every time an app is deployed via `algokit-utils`. Even if you only intend to debug with Puya source maps, the TEAL source maps are always available as a fallback if you need to drop down to a lower-level source map.
### Expected Behavior
[Section titled “Expected Behavior”](#expected-behavior)
The expected behavior is that `games_played` should be updated by `1` for both guest and host.
### Bug
[Section titled “Bug”](#bug)
When the `move` method is called, `games_played` is updated incorrectly for the guest player.
## Step 4: Start the debugger
[Section titled “Step 4: Start the debugger”](#step-4-start-the-debugger)
In VSCode, open Run and Debug from the left sidebar. This loads the compiled smart contract into the debugger. In the Run and Debug panel, select "Debug TEAL via AlgoKit AVM Debugger". It will ask you to select the appropriate `debug_traces` file.
Note
This VSCode launch config comes pre-bundled with the template. Alternatively, you can open the JSON file for the trace you want to debug and click the debug button in the top-right corner (which appears on trace JSON files when the extension is installed).

Figure: Load Debugger in VSCode
Next, it will ask you to select the source map file. Select the `approval.puya.map` file, which tells the debug extension that you want to debug the given trace file using Puya source maps, allowing you to step through high-level Python code. If you need to switch the debugger to TEAL source maps, or to Puya source maps for other frontends such as TypeScript, remove the individual record from the `.algokit/sources/sources.avm.json` file or run the [debugger commands via the VSCode command palette](https://github.com/algorandfoundation/algokit-avm-vscode-debugger#vscode-commands).

## Step 5: Debugging the smart contract
[Section titled “Step 5: Debugging the smart contract”](#step-5-debugging-the-smart-contract)
Let’s now debug the issue:

Enter the `app_id` from the `transaction_group.json` file. This opens the contract. Set a breakpoint in the `move` method; you can also add additional breakpoints.

On the left side, you can see `Program State`, which includes the `program counter`, `opcode`, `stack` and `scratch space`. Under `On-chain State` you can see the `global`, `local` and `box` storage for the application ID deployed on localnet.
Note
We used localnet here, but contracts can be deployed to any other network. A trace file is agnostic of the network it was generated on: as long as it is a complete simulate trace containing the state, stack and scratch values of the execution, the debugger will work with it.
Once you start stepping through the debugger, these panels are populated according to the contract state. You can now step into the code.
## Step 6: Analyze the Output
[Section titled “Step 6: Analyze the Output”](#step-6-analyze-the-output)
Observe that the `games_played` value for the guest is increased by 2 (incorrectly), whereas the value for the host is increased correctly by 1.

## Step 7: Fix the Bug
[Section titled “Step 7: Fix the Bug”](#step-7-fix-the-bug)
Now that we’ve identified the bug, let’s fix the `move` method in our original smart contract:
* Python
```py
    @arc4.abimethod
    def move(self, game_id: UInt64, x: UInt64, y: UInt64) -> None:
        game = self.games[game_id].copy()
        assert not game.is_over.native
        assert game.board[self.coord_to_matrix_index(x, y)] == arc4.Byte()
        assert Txn.sender == game.host.native or Txn.sender == game.guest.native
        is_host = Txn.sender == game.host.native
        if is_host:
            assert game.turns.native % 2 == 0
            self.games[game_id].board[self.coord_to_matrix_index(x, y)] = arc4.Byte(
                HOST_MARK
            )
        else:
            assert game.turns.native % 2 == 1
            self.games[game_id].board[self.coord_to_matrix_index(x, y)] = arc4.Byte(
                GUEST_MARK
            )
        self.games[game_id].turns = arc4.UInt8(self.games[game_id].turns.native + UInt64(1))
        is_over, is_draw = self.is_game_over(self.games[game_id].board.copy())
        if is_over:
            self.games[game_id].is_over = arc4.Bool(True)
            self.games_played[game.host.native] += UInt64(1)
            self.games_played[game.guest.native] += UInt64(1)  # changed here
            if not is_draw:
                winner = game.host if is_host else game.guest
                self.games_won[winner.native] += UInt64(1)
```
## Step 8: Re-deploy
[Section titled “Step 8: Re-deploy”](#step-8-re-deploy)
Re-compile and re-deploy the contract as described in Step 3.
## Step 9: Verify again using Debugger
[Section titled “Step 9: Verify again using Debugger”](#step-9-verify-again-using-debugger)
Reset the `sources.avm.json` file, then restart the debugger and select the `approval.puya.map` file. Run through steps 4 to 6 to verify that `games_played` now updates as expected, confirming the bug has been fixed, as seen below.
Note
Alternatively, you can use the `approval.teal.map` file instead of the Puya source map for a lower-level TEAL debugging session. Refer to the [AlgoKit AVM VSCode Debugger commands](https://github.com/algorandfoundation/algokit-avm-vscode-debugger#vscode-commands) via the VSCode command palette to automate clearing or editing the registry file.

## Summary
[Section titled “Summary”](#summary)
In this tutorial, we walked through the process of using the AVM debugger from AlgoKit Python utils to debug an Algorand smart contract. We set up a debugging environment, loaded a smart contract with a planted bug, stepped through the execution, and identified the issue. This process can be invaluable when developing and testing smart contracts on the Algorand blockchain. Before deploying to the main network, thoroughly test your smart contracts to ensure they function as expected and to prevent costly errors in production.
## Next steps
[Section titled “Next steps”](#next-steps)
To learn more about debugging sessions, refer to the [debugger extension](/docs/algokit-utils/python/latest/concepts/advanced/debugging) documentation.
# Application Client Usage
After using the CLI tool to generate an application client, you will end up with a generated Python file containing several type definitions, an application factory class and an application client class named after the target smart contract. For example, if the contract name is `HelloWorldApp` then you will end up with `HelloWorldAppFactory` and `HelloWorldAppClient` classes. The contract name is also used to prefix a number of other types in the generated file, which allows you to generate clients for multiple smart contracts in a single project without ambiguous type names.
> [!NOTE]
>
> If you are confused about when to use the factory vs client the mental model is: use the client if you know the app ID, use the factory if you don’t know the app ID (deferred knowledge or the instance doesn’t exist yet on the blockchain) or you have multiple app IDs
## Creating an application client instance
[Section titled “Creating an application client instance”](#creating-an-application-client-instance)
The first step to using the factory/client is to create an instance, which can be done via the constructor or more easily via an [`AlgorandClient`](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/algorand-client) instance via `algorand.client.get_typed_app_factory()` and `algorand.client.get_typed_app_client()` (see code examples below).
Once you have an instance, if you want an escape hatch to the [underlying untyped `AppClient` / `AppFactory`](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/app-client) you can access them as a property:
```python
# Untyped `AppFactory`
untyped_factory = factory.app_factory
# Untyped `AppClient`
untyped_client = client.app_client
```
### Get a factory
[Section titled “Get a factory”](#get-a-factory)
The [app factory](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/app-client) allows you to create and deploy one or more app instances and to create one or more app clients to interact with those (or other) app instances when you need to create clients for multiple apps.
If you only need a single client for a single, known app then you can skip using the factory and just [use a client](#get-a-client-by-app-id).
```python
# Via AlgorandClient
factory = algorand.client.get_typed_app_factory(HelloWorldAppFactory)
# Or, using the options:
factory_with_optional_params = algorand.client.get_typed_app_factory(
HelloWorldAppFactory,
default_sender="DEFAULTSENDERADDRESS",
app_name="OverriddenName",
compilation_params={
"deletable": True,
"updatable": False,
"deploy_time_params": {
"VALUE": "1",
},
},
version="2.0",
)
# Or via the constructor
factory = new HelloWorldAppFactory({
algorand,
})
# with options:
factory = new HelloWorldAppFactory({
algorand,
default_sender: "DEFAULTSENDERADDRESS",
app_name: "OverriddenName",
compilation_params={
"deletable": True,
"updatable": False,
"deploy_time_params": {
"VALUE": "1",
},
},
version: "2.0",
})
```
### Get a client by app ID
[Section titled “Get a client by app ID”](#get-a-client-by-app-id)
The typed [app client](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/app-client) can be retrieved by ID.
You can get one by using a previously created app factory, from an `AlgorandClient` instance and using the constructor:
```python
# Via factory
factory = algorand.client.get_typed_app_factory(HelloWorldAppFactory)
client = factory.get_app_client_by_id(app_id=123)
client_with_optional_params = factory.get_app_client_by_id(
    app_id=123,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map`, and `clear_source_map`
)
# Via AlgorandClient
client = algorand.client.get_typed_app_client_by_id(HelloWorldAppClient, app_id=123)
client_with_optional_params = algorand.client.get_typed_app_client_by_id(
    HelloWorldAppClient,
    app_id=123,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map`, and `clear_source_map`
)
# Via constructor
client = HelloWorldAppClient(
    algorand=algorand,
    app_id=123,
)
client_with_optional_params = HelloWorldAppClient(
    algorand=algorand,
    app_id=123,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map`, and `clear_source_map`
)
```
```
### Get a client by creator address and name
[Section titled “Get a client by creator address and name”](#get-a-client-by-creator-address-and-name)
The typed [app client](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/app-client) can be retrieved by looking up apps by name for the given creator address if they were deployed using [AlgoKit deployment conventions](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/app-deploy).
You can get one by using a previously created app factory:
```python
factory = algorand.client.get_typed_app_factory(HelloWorldAppFactory)
client = factory.get_app_client_by_creator_and_name(creator_address="CREATORADDRESS")
client_with_optional_params = factory.get_app_client_by_creator_and_name(
    creator_address="CREATORADDRESS",
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map`, and `clear_source_map`
)
```
Or you can get one using an `AlgorandClient` instance:
```python
client = algorand.client.get_typed_app_client_by_creator_and_name(
    HelloWorldAppClient,
    creator_address="CREATORADDRESS",
)
client_with_optional_params = algorand.client.get_typed_app_client_by_creator_and_name(
    HelloWorldAppClient,
    creator_address="CREATORADDRESS",
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    ignore_cache=True,
    # Can also pass in `app_lookup_cache`, `approval_source_map`, and `clear_source_map`
)
```
### Get a client by network
[Section titled “Get a client by network”](#get-a-client-by-network)
The typed [app client](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/app-client) can be retrieved by network using any included network IDs within the ARC-56 app spec for the current network.
You can get one by using a static method on the app client:
```python
client = HelloWorldAppClient.from_network(algorand)
client_with_optional_params = HelloWorldAppClient.from_network(
    algorand,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map`, and `clear_source_map`
)
```
Or you can get one using an `AlgorandClient` instance:
```python
client = algorand.client.get_typed_app_client_by_network(HelloWorldAppClient)
client_with_optional_params = algorand.client.get_typed_app_client_by_network(
    HelloWorldAppClient,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map`, and `clear_source_map`
)
```
## Deploying a smart contract (create, update, delete, deploy)
[Section titled “Deploying a smart contract (create, update, delete, deploy)”](#deploying-a-smart-contract-create-update-delete-deploy)
The app factory and client will variously include methods for creating (factory), updating (client), and deleting (client) the smart contract, based on the presence of the relevant on completion actions and call config values in the ARC-32 / ARC-56 application spec file. If a smart contract does not support being updated, for instance, then no update methods will be generated in the client.
In addition, the app factory will also include a `deploy` method which will…
* create the application if it doesn’t already exist
* update or recreate the application if it does exist, but differs from the version the client is built on
* recreate the application (and optionally delete the old version) if the deployed version is incompatible with being updated to the client version
* do nothing if the application is already deployed and up to date.
You can find more specifics of this behaviour in the [algokit-utils](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/app-deploy) docs.
### Create
[Section titled “Create”](#create)
To create an app you need to use the factory. The return value will include a typed client instance for the created app.
```python
factory = algorand.client.get_typed_app_factory(HelloWorldAppFactory)
# Create the application using a bare call
result, client = factory.send.create.bare()
# Pass in some compilation flags
factory.send.create.bare(compilation_params={
    "updatable": True,
    "deletable": True,
})
# Create the application using a specific on completion action (ie. not a no_op)
factory.send.create.bare(params=CommonAppFactoryCallParams(on_complete=OnApplicationComplete.OptIn))
# Create the application using an ABI method (ie. not a bare call)
factory.send.create.named_create(
    args=NamedCreateArgs(
        arg1=123,
        arg2="foo",
    ),
)
# Pass compilation flags and on completion actions to an ABI create call
factory.send.create.named_create(
    args=NamedCreateArgs(
        arg1=123,
        arg2="foo",
    ),  # Note: also available as a typed tuple argument
    compilation_params={
        "updatable": True,
        "deletable": True,
    },
    params=CommonAppFactoryCallParams(on_complete=OnApplicationComplete.OptIn),
)
```
If you want to get a built transaction without sending it you can use `factory.create_transaction.create...` rather than `factory.send.create...`. If you want to receive transaction parameters ready to pass in as an ABI argument or to a `TransactionComposer` call then you can use `factory.params.create...`.
### Update and Delete calls
[Section titled “Update and Delete calls”](#update-and-delete-calls)
To update or delete an app you need to use the client.
```python
client = algorand.client.get_typed_app_client_by_id(HelloWorldAppClient, app_id=123)
# Update the application using a bare call
client.send.update.bare()
# Pass in compilation flags
client.send.update.bare(compilation_params={
    "updatable": True,
    "deletable": False,
})
# Update the application using an ABI method
client.send.update.named_update(
    args=NamedUpdateArgs(
        arg1=123,
        arg2="foo",
    ),
)
# Pass compilation flags
client.send.update.named_update(
    args=NamedUpdateArgs(
        arg1=123,
        arg2="foo",
    ),
    compilation_params={
        "updatable": True,
        "deletable": True,
    },
    params=CommonAppCallParams(on_complete=OnApplicationComplete.OptIn),
)
# Delete the application using a bare call
client.send.delete.bare()
# Delete the application using an ABI method
client.send.delete.named_delete()
```
If you want to get a built transaction without sending it you can use `client.create_transaction.update...` / `client.create_transaction.delete...` rather than `client.send.update...` / `client.send.delete...`. If you want to receive transaction parameters ready to pass in as an ABI argument or to a `TransactionComposer` call then you can use `client.params.update...` / `client.params.delete...`.
### Deploy call
[Section titled “Deploy call”](#deploy-call)
The deploy call will make a create call, an update call, a delete-and-create, or no call at all, depending on what is required to have the deployed application match the client’s contract version and the configured `on_update` and `on_schema_break` parameters. As such, the deploy method allows you to configure arguments for each potential call it may make (via `create_params`, `update_params` and `delete_params`). If the smart contract is not updatable or deletable, those parameters will be omitted.
These params values (`create_params`, `update_params` and `delete_params`) will only allow you to specify valid calls that are defined in the ARC-32 / ARC-56 app spec. You can control which call is made via the `method` parameter in these objects. If it’s left out (or set to `None`) then a bare call is made; if it is set to the ABI signature of a call, that ABI call is performed. If that ABI call requires arguments, the argument types will automatically populate in intellisense.
```python
client.deploy(
    create_params={
        "on_complete": OnApplicationComplete.OptIn,
    },
    update_params={
        "method": "named_update(uint64,string)string",
        "args": {
            "arg1": 123,
            "arg2": "foo",
        },
    },
    # Can leave this out and it will do an argumentless bare call (if that call is allowed)
    # delete_params={},
    allow_update=True,
    allow_delete=True,
    on_update=algokit_utils.OnUpdate.UpdateApp,
    on_schema_break=algokit_utils.OnSchemaBreak.ReplaceApp,
)
```
```
## Opt in and close out
[Section titled “Opt in and close out”](#opt-in-and-close-out)
Methods with an `opt_in` or `close_out` on completion action are grouped under properties of the same name within the `send`, `create_transaction` and `params` properties of the client. If the smart contract does not handle one of these on completion actions, it will be omitted.
```python
# Opt in with bare call
client.send.opt_in.bare()
# Opt in with ABI method
client.create_transaction.opt_in.named_opt_in(args=NamedOptInArgs(arg1=123))
# Close out with bare call
client.params.close_out.bare()
# Close out with ABI method
client.send.close_out.named_close_out(args=NamedCloseOutArgs(arg1="foo"))
```
## Clear state
[Section titled “Clear state”](#clear-state)
All clients will have a clear state method which will call the clear state program of the smart contract.
```python
client.send.clear_state()
client.create_transaction.clear_state()
client.params.clear_state()
```
## No-op calls
[Section titled “No-op calls”](#no-op-calls)
The remaining ABI methods which should all have an `on_completion_action` of `OnApplicationComplete.NoOp` will be available on the `send`, `create_transaction` and `params` properties of the client. If a bare no-op call is allowed it will be available via `bare`.
These methods optionally accept an `on_complete` parameter; if a method happens to allow on completion actions other than no-op, those can be provided too (and such methods are also available via the corresponding on-complete sub-property, per above).
```python
# Call an ABI method which takes no args
client.send.some_method()
# Call a no-op bare call
client.create_transaction.bare()
# Call an ABI method, passing args in as a typed dataclass
client.params.some_other_method(args=SomeOtherMethodArgs(arg1=123, arg2="foo"))
```
## Method and argument naming
[Section titled “Method and argument naming”](#method-and-argument-naming)
By default, method, type and argument names are transformed to `snake_case` to match idiomatic Python semantics (class names are converted to idiomatic `PascalCase` as per Python conventions). If you want to keep the names the same as in the ARC-32 / ARC-56 app spec file, pass the `-p` or `--preserve-names` option to the type generator.
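As an illustration of the transformation, a simplified sketch (not the generator's actual implementation, which handles more edge cases such as acronyms and digits):

```python
import re

def to_snake_case(name: str) -> str:
    # Simplified sketch: insert an underscore before each capital letter
    # (except at the start), then lowercase the whole name.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(to_snake_case("namedUpdate"))      # named_update
print(to_snake_case("helloWorldCheck"))  # hello_world_check
```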
### Method name clashes
[Section titled “Method name clashes”](#method-name-clashes)
The ARC-32 / ARC-56 specification allows two methods to have the same name, as long as they have different ABI signatures. On the client these methods will be emitted with a unique name made up of the method’s full signature. Eg. `create_string_uint32_void`.
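A simplified sketch of how a full ABI signature could be flattened into such a unique name (illustrative only; the generator's exact algorithm may differ, e.g. for nested tuple types):

```python
def unique_method_name(signature: str) -> str:
    # "create(string,uint32)void" -> "create_string_uint32_void"
    # Illustrative sketch: join the method name, argument types and return type.
    name, rest = signature.split("(", 1)
    args, returns = rest.rsplit(")", 1)
    parts = [name] + [a for a in args.split(",") if a] + [returns]
    return "_".join(parts)

print(unique_method_name("create(string,uint32)void"))  # create_string_uint32_void
```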
## ABI arguments
[Section titled “ABI arguments”](#abi-arguments)
Each generated method accepts ABI method call arguments as either a `tuple` or a `dataclass`, so you can use whichever feels more comfortable. The accepted types automatically translate from the ABI types specified in the app spec to equivalent Python types.
```python
# ABI method which takes no args
client.send.no_args_method()
# ABI method with args
client.send.other_method(args=OtherMethodArgs(arg1=123, arg2="foo", arg3=bytes([1, 2, 3, 4])))
# Call an ABI method, passing args in as a tuple
client.send.yet_another_method(args=(1, 2, "foo"))
```
## Structs
[Section titled “Structs”](#structs)
If a method takes a struct as a parameter, you can pass the struct in directly; if it returns a struct as output, the result is returned as a parsed struct object.
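A minimal sketch of the idea using a plain dataclass, mirroring the `GameState` ABI tuple `(byte[9], address, address, bool, uint8)` decoded manually in the tutorial's deploy script (the generated client does this mapping for you, with its own generated types):

```python
from dataclasses import dataclass

# Hypothetical struct mirroring the GameState ABI tuple from the tutorial contract
@dataclass
class GameState:
    board: bytes
    host: str
    guest: str
    is_over: bool
    turns: int

    @classmethod
    def from_abi_tuple(cls, value: tuple) -> "GameState":
        # The generated client performs this tuple -> struct mapping automatically
        return cls(*value)

state = GameState.from_abi_tuple((bytes(9), "HOSTADDRESS", "GUESTADDRESS", True, 6))
print(state.is_over)  # True
```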
## Additional parameters
[Section titled “Additional parameters”](#additional-parameters)
Each ABI method and bare call on the client allows the consumer to provide additional parameters as well as the core method / args / etc. parameters. This models the parameters that are available in the underlying [app factory / client](https://github.com/algorandfoundation/algokit-utils-py/blob/main/docs/markdown/capabilities/app-client).
```python
client.send.some_method(
    args=SomeMethodArgs(arg1=123),
    # Additional parameters go here
)
client.send.opt_in.bare(
    # Additional parameters go here
)
```
## Composing transactions
[Section titled “Composing transactions”](#composing-transactions)
Algorand allows multiple transactions to be composed into a single atomic transaction group to be committed (or rejected) as one.
### Using the fluent composer
[Section titled “Using the fluent composer”](#using-the-fluent-composer)
The client exposes a fluent transaction composer which allows you to build up a group before sending it. The return values will be strongly typed based on the methods you add to the composer.
```python
result = (
    client.new_group()
    .method_one(args=SomeMethodArgs(arg1=123), box_references=["V"])
    # Non-ABI transactions can still be added to the group
    .add_transaction(
        client.app_client.create_transaction.fund_app_account(
            FundAppAccountParams(
                amount=AlgoAmount.from_micro_algos(5000)
            )
        )
    )
    .method_two(args=SomeOtherMethodArgs(arg1="foo"))
    .send()
)
# Strongly typed as the return type of method_one
result_of_method_one = result.returns[0]
# Strongly typed as the return type of method_two
result_of_method_two = result.returns[1]
```
### Manually with the TransactionComposer
[Section titled “Manually with the TransactionComposer”](#manually-with-the-transactioncomposer)
Multiple transactions can also be composed using the `TransactionComposer` class.
```python
result = (
    algorand.new_group()
    .add_app_call_method_call(
        client.params.method_one(args=SomeMethodArgs(arg1=123), box_references=["V"])
    )
    .add_payment(
        client.app_client.params.fund_app_account(
            FundAppAccountParams(amount=AlgoAmount.from_micro_algos(5000))
        )
    )
    .add_app_call_method_call(client.params.method_two(args=SomeOtherMethodArgs(arg1="foo")))
    .send()
)
# returns will contain a result object for each ABI method call in the transaction group
for return_value in result.returns:
    print(return_value)
```
## State
[Section titled “State”](#state)
You can access local, global and box storage state with any state values that are defined in the ARC-32 / ARC-56 app spec.
You can do this via the `state` property, which has three sub-properties for the three different kinds of state: `state.global_state`, `state.local(address)` and `state.box`. Each one then has a series of methods defined for each registered key or map from the app spec.
Maps have a `value(key)` method to get a single value from the map by key and a `get_map()` method to return all map values as a dictionary. Keys have a `{key_name}()` method to get the value for the key, and there is also a `get_all()` method to get a dictionary with all key values.
The properties will return values of the corresponding Python type for the type in the app spec, and any structs will be parsed as the struct object.
```python
factory = algorand.client.get_typed_app_factory(Arc56TestFactory, default_sender="SENDER")
result, client = factory.send.create.create_application(
    args=[],
    compilation_params={"deploy_time_params": {"some_number": 1337}},
)
assert client.state.global_state.global_key() == 1337
assert another_app_client.state.global_state.global_key() == 1338
assert client.state.global_state.global_map.value("foo") == {
    "foo": 13,
    "bar": 37,
}
client.app_client.fund_app_account(
    FundAppAccountParams(amount=AlgoAmount.from_micro_algos(1_000_000))
)
client.send.opt_in.opt_in_to_application(
    args=[],
)
assert client.state.local(default_sender).local_key() == 1337
assert client.state.local(default_sender).local_map.value("foo") == "bar"
assert client.state.box.box_key() == "baz"
assert client.state.box.box_map.value({
    "add": {"a": 1, "b": 2},
    "subtract": {"a": 4, "b": 3},
}) == {
    "sum": 3,
    "difference": 1,
}
```
# Application Client Usage
After using the CLI tool to generate an application client you will end up with a TypeScript file containing several type definitions, an application factory class and an application client class that is named after the target smart contract. For example, if the contract name is `HelloWorldApp` then you will end up with `HelloWorldAppFactory` and `HelloWorldAppClient` classes. The contract name will also be used to prefix a number of other types in the generated file, which allows you to generate clients for multiple smart contracts in one project without ambiguous type names.
> [!NOTE]
>
> If you are confused about when to use the factory vs the client, the mental model is: use the client if you know the app ID; use the factory if you don’t know the app ID (deferred knowledge, or the instance doesn’t yet exist on the blockchain) or if you have multiple app IDs.
## Creating an application client instance
[Section titled “Creating an application client instance”](#creating-an-application-client-instance)
The first step to using the factory/client is to create an instance, which can be done via the constructor or more easily via an [`AlgorandClient`](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/algorand-client) instance via `algorand.client.getTypedAppFactory()` and `algorand.client.getTypedAppClient*()` (see code examples below).
Once you have an instance, if you want an escape hatch to the [underlying untyped `AppClient` / `AppFactory`](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-client) you can access them as a property:
```typescript
// Untyped `AppFactory`
const untypedFactory = factory.appFactory;
// Untyped `AppClient`
const untypedClient = client.appClient;
```
### Get a factory
[Section titled “Get a factory”](#get-a-factory)
The [app factory](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-client) allows you to create and deploy one or more app instances and to create one or more app clients to interact with those (or other) app instances when you need to create clients for multiple apps.
If you only need a single client for a single, known app then you can skip using the factory and just [use a client](#get-a-client-by-app-id).
```typescript
// Via AlgorandClient
const factory = algorand.client.getTypedAppFactory(HelloWorldAppFactory);
// Or, using the options:
const factoryWithOptionalParams = algorand.client.getTypedAppFactory(HelloWorldAppFactory, {
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenName',
  deletable: true,
  updatable: false,
  deployTimeParams: {
    VALUE: '1',
  },
  version: '2.0',
});
// Or via the constructor
const factory = new HelloWorldAppFactory({
  algorand,
});
// with options:
const factory = new HelloWorldAppFactory({
  algorand,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenName',
  deletable: true,
  updatable: false,
  deployTimeParams: {
    VALUE: '1',
  },
  version: '2.0',
});
```
### Get a client by app ID
[Section titled “Get a client by app ID”](#get-a-client-by-app-id)
The typed [app client](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-client) can be retrieved by ID.
You can get one by using a previously created app factory, from an `AlgorandClient` instance and using the constructor:
```typescript
// Via factory
const factory = algorand.client.getTypedAppFactory(HelloWorldAppFactory);
const client = factory.getAppClientById({ appId: 123n });
const clientWithOptionalParams = factory.getAppClientById({
  appId: 123n,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
// Via AlgorandClient
const client = algorand.client.getTypedAppClientById(HelloWorldAppClient, {
  appId: 123n,
});
const clientWithOptionalParams = algorand.client.getTypedAppClientById(HelloWorldAppClient, {
  appId: 123n,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
// Via constructor
const client = new HelloWorldAppClient({
  algorand,
  appId: 123n,
});
const clientWithOptionalParams = new HelloWorldAppClient({
  algorand,
  appId: 123n,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
```
### Get a client by creator address and name
[Section titled “Get a client by creator address and name”](#get-a-client-by-creator-address-and-name)
The typed [app client](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-client) can be retrieved by looking up apps by name for the given creator address if they were deployed using [AlgoKit deployment conventions](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-deploy).
You can get one by using a previously created app factory:
```typescript
const factory = algorand.client.getTypedAppFactory(HelloWorldAppFactory);
const client = factory.getAppClientByCreatorAndName({ creatorAddress: 'CREATORADDRESS' });
const clientWithOptionalParams = factory.getAppClientByCreatorAndName({
  creatorAddress: 'CREATORADDRESS',
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
```
Or you can get one using an `AlgorandClient` instance:
```typescript
const client = algorand.client.getTypedAppClientByCreatorAndName(HelloWorldAppClient, {
  creatorAddress: 'CREATORADDRESS',
});
const clientWithOptionalParams = algorand.client.getTypedAppClientByCreatorAndName(
  HelloWorldAppClient,
  {
    creatorAddress: 'CREATORADDRESS',
    defaultSender: 'DEFAULTSENDERADDRESS',
    appName: 'OverriddenAppName',
    ignoreCache: true,
    // Can also pass in `appLookupCache`, `approvalSourceMap`, and `clearSourceMap`
  },
);
```
### Get a client by network
[Section titled “Get a client by network”](#get-a-client-by-network)
The typed [app client](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-client) can be retrieved by network using any included network IDs within the ARC-56 app spec for the current network.
You can get one by using a static method on the app client:
```typescript
const client = HelloWorldAppClient.fromNetwork({ algorand });
const clientWithOptionalParams = HelloWorldAppClient.fromNetwork({
  algorand,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
```
Or you can get one using an `AlgorandClient` instance:
```typescript
const client = algorand.client.getTypedAppClientByNetwork(HelloWorldAppClient);
const clientWithOptionalParams = algorand.client.getTypedAppClientByNetwork(HelloWorldAppClient, {
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
```
## Deploying a smart contract (create, update, delete, deploy)
[Section titled “Deploying a smart contract (create, update, delete, deploy)”](#deploying-a-smart-contract-create-update-delete-deploy)
The app factory and client will variously include methods for creating (factory), updating (client), and deleting (client) the smart contract based on the presence of relevant on completion actions and call config values in the ARC-32 / ARC-56 application spec file. If a smart contract does not support being updated for instance, then no update methods will be generated in the client.
In addition, the app factory will also include a `deploy` method which will…
* create the application if it doesn’t already exist
* update or recreate the application if it does exist, but differs from the version the client is built on
* recreate the application (and optionally delete the old version) if the deployed version is incompatible with being updated to the client version
* do nothing if the application is already deployed and up to date.
You can find more specifics of this behaviour in the [algokit-utils](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-deploy) docs.
### Create
[Section titled “Create”](#create)
To create an app you need to use the factory. The return value will include a typed client instance for the created app.
```ts
const factory = algorand.client.getTypedAppFactory(HelloWorldAppFactory);
// Create the application using a bare call
const { result, appClient: client } = await factory.send.create.bare();
// Pass in some compilation flags
factory.send.create.bare({
  updatable: true,
  deletable: true,
});
// Create the application using a specific on completion action (ie. not a no_op)
factory.send.create.bare({
  onComplete: OnApplicationComplete.OptIn,
});
// Create the application using an ABI method (ie. not a bare call)
factory.send.create.namedCreate({
  args: {
    arg1: 123,
    arg2: 'foo',
  },
});
// Pass compilation flags and on completion actions to an ABI create call
factory.send.create.namedCreate({
  args: {
    arg1: 123,
    arg2: 'foo',
  },
  updatable: true,
  deletable: true,
  onComplete: OnApplicationComplete.OptIn,
});
```
If you want to get a built transaction without sending it you can use `factory.createTransaction.create...` rather than `factory.send.create...`. If you want to receive transaction parameters ready to pass in as an ABI argument or to a `TransactionComposer` call then you can use `factory.params.create...`.
### Update and Delete calls
[Section titled “Update and Delete calls”](#update-and-delete-calls)
To update or delete an app you need to use the client.
```ts
const client = algorand.client.getTypedAppClientById(HelloWorldAppClient, {
  appId: 123n,
});
// Update the application using a bare call
client.send.update.bare();
// Pass in compilation flags
client.send.update.bare({
  updatable: true,
  deletable: false,
});
// Update the application using an ABI method
client.send.update.namedUpdate({
  args: {
    arg1: 123,
    arg2: 'foo',
  },
});
// Pass compilation flags
client.send.update.namedUpdate({
  args: {
    arg1: 123,
    arg2: 'foo',
  },
  updatable: true,
  deletable: true,
});
// Delete the application using a bare call
client.send.delete.bare();
// Delete the application using an ABI method
client.send.delete.namedDelete();
```
If you want to get a built transaction without sending it you can use `client.createTransaction.update...` / `client.createTransaction.delete...` rather than `client.send.update...` / `client.send.delete...`. If you want to receive transaction parameters ready to pass in as an ABI argument or to a `TransactionComposer` call then you can use `client.params.update...` / `client.params.delete...`.
### Deploy call
[Section titled “Deploy call”](#deploy-call)
The deploy call will make a create, update, or delete and create, or no call depending on what is required to have the deployed application match the client’s contract version and the configured `onUpdate` and `onSchemaBreak` parameters. As such the deploy method allows you to configure arguments for each potential call it may make (via `createParams`, `updateParams` and `deleteParams`). If the smart contract is not updatable or deletable, those parameters will be omitted.
These params values (`createParams`, `updateParams` and `deleteParams`) will only allow you to specify valid calls that are defined in the ARC-32 / ARC-56 app spec. You can control what call is made via the `method` parameter in these objects. If it’s left out (or set to `undefined`) then it will be a bare call, if set to the ABI signature of a call it will perform that ABI call. If there are arguments required for that ABI call then the type of the arguments will automatically populate in intellisense.
```ts
client.deploy({
  createParams: {
    onComplete: OnApplicationComplete.OptIn,
  },
  updateParams: {
    method: 'named_update(uint64,string)string',
    args: {
      arg1: 123,
      arg2: 'foo',
    },
  },
  // Can leave this out and it will do an argumentless bare call (if that call is allowed)
  // deleteParams: {}
  allowUpdate: true,
  allowDelete: true,
  onUpdate: 'update',
  onSchemaBreak: 'replace',
});
```
## Opt in and close out
[Section titled “Opt in and close out”](#opt-in-and-close-out)
Methods with an `opt_in` or `close_out` `onCompletionAction` are grouped under properties of the same name within the `send`, `createTransaction` and `params` properties of the client. If the smart contract does not handle one of these on completion actions, it will be omitted.
```ts
// Opt in with bare call
client.send.optIn.bare();
// Opt in with ABI method
client.createTransaction.optIn.namedOptIn({ args: { arg1: 123 } });
// Close out with bare call
client.params.closeOut.bare();
// Close out with ABI method
client.send.closeOut.namedCloseOut({ args: { arg1: 'foo' } });
```
## Clear state
[Section titled “Clear state”](#clear-state)
All clients will have a clear state method which will call the clear state program of the smart contract.
```ts
client.send.clearState();
client.createTransaction.clearState();
client.params.clearState();
```
## No-op calls
[Section titled “No-op calls”](#no-op-calls)
The remaining ABI methods, which all have an `onCompletionAction` of `OnApplicationComplete.NoOp`, will be available on the `send`, `createTransaction` and `params` properties of the client. If a bare no-op call is allowed it will be available via `bare`.
These methods allow you to optionally pass in `onComplete`, and if the method happens to allow on-completion actions other than no-op, those can also be provided (such methods will also be available via the corresponding on-complete sub-property too, per above).
```ts
// Call an ABI method which takes no args
client.send.someMethod();
// Call a no-op bare call
client.createTransaction.bare();
// Call an ABI method, passing args in as a dictionary
client.params.someOtherMethod({ args: { arg1: 123, arg2: 'foo' } });
```
## Method and argument naming
[Section titled “Method and argument naming”](#method-and-argument-naming)
By default, names of methods, types and arguments will be transformed to `camelCase` to match TypeScript idiomatic semantics. If you want to keep the names the same as what is in the ARC-32 / ARC-56 app spec file (e.g. `snake_case` etc.) then you can pass the `-p` or `--preserve-names` option to the type generator.
### Method name clashes
[Section titled “Method name clashes”](#method-name-clashes)
The ARC-32 / ARC-56 specification allows two methods to have the same name, as long as they have different ABI signatures. On the client these methods will be emitted with a unique name made up of the method’s full signature. Eg. `createStringUint32Void`.
Whilst TypeScript supports method overloading, in practice it would be impossible to reliably resolve the desired overload at run time once you factor in methods with default parameters.
## ABI arguments
[Section titled “ABI arguments”](#abi-arguments)
Each generated method will accept ABI method call arguments in both a tuple and a dictionary format, so you can use whichever feels more comfortable. The types that are accepted will automatically translate from the specified ABI types in the app spec to an equivalent TypeScript type.
```ts
// ABI method which takes no args
client.send.noArgsMethod({ args: {} });
client.send.noArgsMethod({ args: [] });
// ABI method with args
client.send.otherMethod({ args: { arg1: 123, arg2: 'foo', arg3: new Uint8Array([1, 2, 3, 4]) } });
// Call an ABI method, passing args in as a tuple
client.send.yetAnotherMethod({ args: [1, 2, 'foo'] });
```
## Structs
[Section titled “Structs”](#structs)
If the method takes a struct as a parameter, or returns a struct as an output then it will automatically be allowed to be passed in and will be returned as the parsed struct object.
## Additional parameters
[Section titled “Additional parameters”](#additional-parameters)
Each ABI method and bare call on the client allows the consumer to provide additional parameters as well as the core method / args / etc. parameters. This models the parameters that are available in the underlying [app factory / client](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-client).
```ts
client.send.someMethod({
  args: {
    arg1: 123,
  },
  /* Additional parameters go here */
});
client.send.optIn.bare({
  /* Additional parameters go here */
});
```
## Composing transactions
[Section titled “Composing transactions”](#composing-transactions)
Algorand allows multiple transactions to be composed into a single atomic transaction group to be committed (or rejected) as one.
### Using the fluent composer
[Section titled “Using the fluent composer”](#using-the-fluent-composer)
The client exposes a fluent transaction composer which allows you to build up a group before sending it. The return values will be strongly typed based on the methods you add to the composer.
```ts
const result = await client
  .newGroup()
  .methodOne({ args: { arg1: 123 }, boxReferences: ['V'] })
  // Non-ABI transactions can still be added to the group
  .addTransaction(client.appClient.createTransaction.fundAppAccount({ amount: (5000).microAlgo() }))
  .methodTwo({ args: { arg1: 'foo' } })
  .execute();
// Strongly typed as the return type of methodOne
const resultOfMethodOne = result.returns[0];
// Strongly typed as the return type of methodTwo
const resultOfMethodTwo = result.returns[1];
```
### Manually with the TransactionComposer
[Section titled “Manually with the TransactionComposer”](#manually-with-the-transactioncomposer)
Multiple transactions can also be composed using the `TransactionComposer` class.
```ts
const result = await algorand
  .newGroup()
  .addAppCallMethodCall(client.params.methodOne({ args: { arg1: 123 }, boxReferences: ['V'] }))
  .addPayment(client.appClient.params.fundAppAccount({ amount: (5000).microAlgo() }))
  .addAppCallMethodCall(client.params.methodTwo({ args: { arg1: 'foo' } }))
  .execute();
// returns will contain a result object for each ABI method call in the transaction group
for (const { returnValue } of result.returns) {
  console.log(returnValue);
}
```
## State
[Section titled “State”](#state)
You can access local, global and box storage state with any state values that are defined in the ARC-32 / ARC-56 app spec.
You can do this via the `state` property which has 3 sub-properties for the three different kinds of state: `state.global`, `state.local(address)`, `state.box`. Each one then has a series of methods defined for each registered key or map from the app spec.
Maps have a `value(key)` method to get a single value from the map by key and a `getMap()` method to return all map values as a map. Keys have a `{keyName}()` method to get the value for the key, and there will also be a `getAll()` method to get an object with all key values.
The properties will return values of the corresponding TypeScript type for the type in the app spec and any structs will be parsed as the struct object.
```typescript
const factory = algorand.client.getTypedAppFactory(Arc56TestFactory, { defaultSender: 'SENDER' });
const { appClient: client } = await factory.send.create.createApplication({
  args: [],
  deployTimeParams: { someNumber: 1337n },
});
expect(await client.state.global.globalKey()).toBe(1337n);
expect(await anotherAppClient.state.global.globalKey()).toBe(1338n);
expect(await client.state.global.globalMap.value('foo')).toEqual({ foo: 13n, bar: 37n });
await client.appClient.fundAppAccount({ amount: microAlgos(1_000_000) });
await client.send.optIn.optInToApplication({ args: [], populateAppCallResources: true });
expect(await client.state.local(defaultSender).localKey()).toBe(1337n);
expect(await client.state.local(defaultSender).localMap.value('foo')).toBe('bar');
expect(await client.state.box.boxKey()).toBe('baz');
expect(
  await client.state.box.boxMap.value({
    add: { a: 1n, b: 2n },
    subtract: { a: 4n, b: 3n },
  }),
).toEqual({
  sum: 3n,
  difference: 1n,
});
```
# AlgoKit Templates
> Overview of AlgoKit templates
## Using a Custom AlgoKit Template
[Section titled “Using a Custom AlgoKit Template”](#using-a-custom-algokit-template)
To initialize a community AlgoKit template, you can either provide a URL to the community template during the interactive wizard or pass `--template-url` to `algokit init`. For example:
```shell
algokit init --template-url https://github.com/algorandfoundation/algokit-python-template
# This is the url of the official Python template. Replace with the community template URL.
# or
algokit init # and select the Custom Template option
```
When you select the `Custom Template` option during the interactive wizard, you will be prompted to provide the URL of the custom template.
```shell
Community templates have not been reviewed, and can execute arbitrary code.
Please inspect the template repository, and pay particular attention to the values of _tasks, _migrations and _jinja_extensions in copier.yml
Enter a custom project URL, or leave blank and press enter to go back to official template selection.
Note that you can use gh: as a shorthand for github.com and likewise gl: for gitlab.com
Valid examples:
- gh:copier-org/copier
- gl:copier-org/copier
- git@github.com:copier-org/copier.git
- git+https://mywebsiteisagitrepo.example.com/
- /local/path/to/git/repo
- /local/path/to/git/bundle/file.bundle
- ~/path/to/git/repo
- ~/path/to/git/repo.bundle
? Custom template URL: # Enter the URL of the custom template here
```
The `--template-url` option can be combined with `--template-url-ref` to specify a specific commit, branch or tag. For example:
```shell
algokit init --template-url https://github.com/algorandfoundation/algokit-python-template --template-url-ref 9985005b7389c90c6afed685d75bb8e7608b2a96
```
If the URL is not an official template, there is a potential security risk, so to continue you must acknowledge this prompt. If you are in a non-interactive environment you can instead pass the `--UNSAFE-SECURITY-accept-template-url` option (though we generally don’t recommend this, so that users can review the warning message first). For example:
```shell
Community templates have not been reviewed, and can execute arbitrary code.
Please inspect the template repository, and pay particular attention to the values of _tasks, _migrations and _jinja_extensions in copier.yml
? Continue anyway? Yes
```
## Creating Custom AlgoKit Templates
[Section titled “Creating Custom AlgoKit Templates”](#creating-custom-algokit-templates)
If the official templates do not serve your needs, you can create custom AlgoKit templates tailored to your project requirements or industry needs. These custom templates can be used for your future projects or contributed to the Algorand developer community, enhancing the ecosystem with specialized solutions.
Creating templates in AlgoKit involves using various configuration files and a templating engine to generate project structures tailored to your needs. This guide will cover the key concepts and best practices for creating templates in AlgoKit. We will also refer to the official [`algokit-python-template`](https://github.com/algorandfoundation/algokit-python-template) as an example.
### Quick Start
[Section titled “Quick Start”](#quick-start)
For users who are keen on getting started with creating AlgoKit templates, you can follow these quick steps:
1. Click on `Use this template`->`Create a new repository` on the [algokit-python-template](https://github.com/algorandfoundation/algokit-python-template) GitHub page. This will create a new reference repository with a clean git history, allowing you to modify and transform the base Python template into your custom template.
2. Modify the cloned template according to your specific needs. The remainder of this tutorial will help you understand expected behaviors from the AlgoKit side, Copier, the templating framework, and key concepts related to the default files you will encounter in the reference template.
### Overview of AlgoKit Templates
[Section titled “Overview of AlgoKit Templates”](#overview-of-algokit-templates)
AlgoKit templates are project scaffolds that can initialize new smart contract projects. These templates can include code files, configuration files, and scripts. AlgoKit uses Copier and the Jinja templating engine to create new projects based on these templates.
#### Copier/Jinja
[Section titled “Copier/Jinja”](#copierjinja)
AlgoKit uses Copier templates. Copier is a library that allows you to create project templates that can be easily replicated and customized. It’s often used along with Jinja, a modern and designer-friendly templating engine for the Python programming language. Jinja is used in Copier templates to substitute variables in files and file names. You can find more information in the [Copier documentation](https://copier.readthedocs.io/) and [Jinja documentation](https://jinja.palletsprojects.com/).
#### AlgoKit Functionality with Templates
[Section titled “AlgoKit Functionality with Templates”](#algokit-functionality-with-templates)
AlgoKit provides the `algokit init` command to initialize a new project using a template. You can pass the template name using the `-t` flag or select a template from a list.
### Key Concepts
[Section titled “Key Concepts”](#key-concepts)
#### .algokit.toml
[Section titled “.algokit.toml”](#algokittoml)
This file is the AlgoKit configuration file for the project, and it can be used to specify the minimum version of AlgoKit that the template requires. This is essential to ensure that projects created with your template are always compatible with the version of AlgoKit they are using.
Example from `algokit-python-template`:
```toml
[algokit]
min_version = "v1.1.0-beta.4"
```
This specifies that the template requires at least version `v1.1.0-beta.4` of AlgoKit.
#### Python Support: `pyproject.toml`
[Section titled “Python Support: pyproject.toml”](#python-support-pyprojecttoml)
Python projects in AlgoKit can leverage various tools for dependency management and project configuration. While Poetry and the `pyproject.toml` file are common choices, they are not the only options. If you opt to use Poetry, you’ll rely on the `pyproject.toml` file to define the project’s metadata and dependencies. This configuration file can utilize the Jinja templating syntax for customization.
Example snippet from `algokit-python-template`:
```toml
[tool.poetry]
name = "{{ project_name }}"
version = "0.1.0"
description = "Algorand smart contracts"
authors = ["{{ author_name }} <{{ author_email }}>"]
readme = "README.md"
...
```
This example shows how project metadata and dependencies are defined in `pyproject.toml`, using Jinja syntax to allow placeholders for project metadata.
#### TypeScript Support: `package.json`
[Section titled “TypeScript Support: package.json”](#typescript-support-packagejson)
For TypeScript projects, the `package.json` file plays a similar role to `pyproject.toml` for Python projects. It specifies metadata about the project and lists the dependencies required for smart contract development.
Example snippet:
```json
{
  "name": "{{ project_name }}",
  "version": "1.0.0",
  "description": "{{ project_description }}",
  "scripts": {
    "build": "tsc"
  },
  "devDependencies": {
    "typescript": "^4.2.4",
    "tslint": "^6.1.3",
    "tslint-config-prettier": "^1.18.0"
  }
}
```
This example shows how Jinja syntax is used within `package.json` to allow placeholders for project metadata and dependencies.
#### Bootstrap Option
[Section titled “Bootstrap Option”](#bootstrap-option)
When your template is instantiated via the AlgoKit CLI, the user is optionally prompted to automatically run [algokit bootstrap](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/bootstrap.md) after the project is initialized, which can perform various setup tasks such as installing dependencies or setting up environment files.
* `env`: Searches for and copies a `.env*.template` file to an equivalent `.env*` file in the current working directory, prompting for any unspecified values. This feature is integral for securely managing environment variables, preventing sensitive data from inadvertently ending up in version control. By default, AlgoKit will scan for network-prefixed `.env` files (e.g., `.env.localnet`), which can be particularly useful when relying on the [AlgoKit deploy command](https://github.com/algorandfoundation/algokit-cli/blob/deploy-command/docs/features/deploy.md). If no such prefixed files are located, AlgoKit will then attempt to load default `.env` files. This functionality provides greater flexibility for different network configurations.
* `poetry`: If your Python project uses Poetry for dependency management, the `poetry` command installs Poetry (if not present) and runs `poetry install` in the current working directory to install Python dependencies.
* `npm`: If you’re developing a JavaScript or TypeScript project, the `npm` command runs `npm install` in the current working directory to install Node.js dependencies.
* `all`: The `all` command runs all the aforementioned bootstrap sub-commands in the current directory and its subdirectories. This command is a comprehensive way to ensure all project dependencies and environment variables are correctly set up.
#### Predefined Copier Answers
[Section titled “Predefined Copier Answers”](#predefined-copier-answers)
Copier can prompt the user for input when initializing a new project, which is then passed to the template as variables. This is useful for customizing the new project based on user input.
Example:
copier.yaml
```yaml
project_name:
  type: str
  help: What is the name of this project?
  placeholder: 'algorand-app'
```
This would prompt the user for the project name, and the input can then be used in the template using the Jinja syntax `{{ project_name }}`.
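To see what this substitution amounts to, here is a stdlib stand-in for the rendering step. Real templates use Jinja’s `{{ project_name }}` syntax; `string.Template` with `${project_name}` is used here only so the example is self-contained:

```python
from string import Template

# Copier collects answers at prompt time, then renders them into every
# templated file. This mimics that substitution with the stdlib.
answers = {"project_name": "algorand-app"}
rendered = Template("# Welcome to ${project_name}").substitute(answers)
print(rendered)  # → "# Welcome to algorand-app"
```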
##### Default Behaviors
[Section titled “Default Behaviors”](#default-behaviors)
When creating an AlgoKit template, there are a few default behaviors that you can expect to be provided by algokit-cli itself without introducing any extra code to your templates:
* **Git**: If Git is installed on the user’s system and the user’s working directory is a Git repository, AlgoKit CLI will commit the newly created project as a new commit in the repository. This feature helps to maintain a clean version history for the project. If you wish to add a specific commit message for this action, you can specify a `commit_message` in the `_commit` option in your `copier.yaml` file.
* **VSCode**: If the user has Visual Studio Code (VSCode) installed and the path to VSCode is added to their system’s PATH, AlgoKit CLI will automatically open the newly created project in VSCode unless the user provides specific flags to the init command.
* **Bootstrap**: AlgoKit CLI is equipped to execute a bootstrap script after a project has been initialized. This script, included in AlgoKit templates, can be automatically run to perform various setup tasks, such as installing dependencies or setting up databases. This is managed by AlgoKit CLI and not within the user-created codebase. By default, if a `bootstrap` task is defined in the `copier.yaml`, AlgoKit CLI will execute it unless the user opts out during the prompt.
By combining predefined Copier answers with these default behaviors, you can create a smooth, efficient, and intuitive initialization experience for the users of your template.
#### Executing Python Tasks in Templates
[Section titled “Executing Python Tasks in Templates”](#executing-python-tasks-in-templates)
If you need to use Python scripts as tasks within your Copier templates, ensure that you have Python installed on the host machine. By convention, AlgoKit automatically detects the Python installation on your machine and fills in the `python_path` variable accordingly. This process ensures that any Python scripts included as tasks within your Copier templates will execute using the system’s Python interpreter. It’s important to note that the use of `_copier_python` is not recommended. Here’s an example of specifying a Python script execution in your `copier.yaml` without needing to explicitly use `_copier_python`:
```yaml
- '{{ python_path }} your_python_script.py'
```
If you’d like your template to be backwards compatible with versions of `algokit-cli` older than `v1.11.3` when executing custom Python scripts via `copier` tasks, you can use a conditional statement to determine the Python path:
```yaml
- '{{ python_path if python_path else _copier_python }} your_python_script.py'
# _copier_python above is used for backwards compatibility with versions < v1.11.3 of the algokit cli
```
And to define `python_path` in your Copier questions:
```yaml
# Auto determined by algokit-cli from v1.11.3 to allow execution of python script
# in binary mode.
python_path:
  type: str
  help: Path to the sys.executable.
  when: false
```
#### Working with Generators
[Section titled “Working with Generators”](#working-with-generators)
After mastering the use of `copier` and building your templates based on the official AlgoKit template repositories, you can enhance your proficiency by learning to define `custom generators`. Essentially, generators are smaller-scope `copier` templates designed to provide additional functionality after a project has been initialized from the template.
For example, the official [`algokit-python-template`](https://github.com/algorandfoundation/algokit-python-template/tree/main/template_content) incorporates a generator in the `.algokit/generators` directory. This generator can be utilized to execute auxiliary tasks on AlgoKit projects that are initiated from this template, like adding new smart contracts to an existing project. For a comprehensive understanding, please consult the [`architecture decision record`](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/architecture-decisions/2023-07-19_advanced_generate_command.md) and [`algokit generate documentation`](/docs/algokit-cli/python/latest/features/generate).
##### How to Create a Generator
[Section titled “How to Create a Generator”](#how-to-create-a-generator)
Outlined below are the fundamental steps to create a generator. Although `copier` provides complete autonomy in structuring your template, you may prefer to define your generator to meet your specific needs. Nevertheless, as a starting point, we suggest:
1. Generate a new directory hierarchy within your template directory under the `.algokit/generators` folder (this is merely a suggestion; you can define your custom path if necessary and point to it via the algokit.toml file).
2. Develop a `copier.yaml` file within the generator directory and outline the generator’s behavior. This file bears similarities with the root `copier.yaml` file in your template directory, but it is exclusively for the generator. The `tasks` section of the `copier.yaml` file is where you can determine the generator’s behavior. Here’s an example of a generator that copies the `smart-contract` directory from the template to the current working directory:
```yaml
_tasks:
  - "echo '==== Successfully initialized new smart contract 🚀 ===='"

contract_name:
  type: str
  help: Name of your new contract.
  placeholder: 'my-new-contract'
  default: 'my-new-contract'

_templates_suffix: '.j2'
```
Note that `_templates_suffix` must be different from the `_templates_suffix` defined in the root `copier.yaml` file. This is because the generator’s `copier.yaml` file is processed separately from the root `copier.yaml` file.
3. Develop your `generator` copier content and, when ready, test it by initiating a new project for your template and executing the generator command:
```shell
algokit generate
```
This should dynamically load and display your generator as an additional CLI command that your template users can execute.
### Recommendations
[Section titled “Recommendations”](#recommendations)
* **Modularity**: Break your templates into modular components that can be combined in different ways.
* **Documentation**: Include README files and comments in your templates to explain how they should be used.
* **Versioning**: Use `.algokit.toml` to specify the minimum compatible version of AlgoKit.
* **Testing**: Include test configurations and scripts in your templates to encourage testing best practices.
* **Linting and Formatting**: Integrate linters and code formatters in your templates to ensure code quality.
* **AlgoKit Principles**: For details on generic principles for designing templates, refer to the [AlgoKit design principles](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/algokit.md#guiding-principles).
### Conclusion
[Section titled “Conclusion”](#conclusion)
Creating custom templates in AlgoKit is a powerful way to streamline your development workflow for Algorand smart contracts using Python or TypeScript. Leveraging Copier and Jinja for templating and incorporating best practices for modularity, documentation, and coding standards can result in robust, flexible, and user-friendly templates that can be valuable to your projects and the broader Algorand community.
Happy coding!
# Algorand Python Language Server
The Algorand Python language extension brings language server-powered capabilities to your smart contract authoring experience in Visual Studio Code. It extends the results from your installed Python language server to provide Algorand Python-specific diagnostics and code actions.
This tutorial demonstrates how to set up and use the Algorand Python extension to identify and resolve common issues early in your development workflow. We’ll walk through identifying and fixing bugs in an Algorand Python smart contract using the extension’s diagnostic features.
## Prerequisites
[Section titled “Prerequisites”](#prerequisites)
* [Visual Studio Code](https://code.visualstudio.com/download) 1.80.0 or higher
* [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
* [Python](https://www.python.org/downloads/) 3.12 or higher
* [PuyaPy](https://pypi.org/project/puyapy/) 5.3.0 or higher
* Basic understanding of [Algorand smart contracts using Python](/concepts/smart-contracts/languages/python/)
Caution
The Algorand Python extension is currently in **beta**. It works alongside your existing Python language server (recommended with Pylance) to provide additional Algorand-specific diagnostics and code actions for smart contract development. There is currently some latency between making code changes and seeing updated diagnostics, which will be addressed in a future update.
## Step 1: Install the Extension
[Section titled “Step 1: Install the Extension”](#step-1-install-the-extension)
Install the Algorand Python language extension from the VSCode Marketplace:
1. Open the Extensions view in VSCode (`Ctrl+Shift+X` or `Cmd+Shift+X`)
2. Search for `Algorand Python`
3. Click `Install` on the extension published by the Algorand Foundation
Alternatively, install directly from the [marketplace](https://marketplace.visualstudio.com/items?itemName=AlgorandFoundation.algorand-python-vscode).

## Step 2: Set Up Your Project
[Section titled “Step 2: Set Up Your Project”](#step-2-set-up-your-project)
### Initialize an AlgoKit Project
[Section titled “Initialize an AlgoKit Project”](#initialize-an-algokit-project)
If you’re starting a new project, use AlgoKit to generate a Python smart contract project:
```bash
algokit init
```
Select options for a Python smart contract project from the interactive prompts.
If you haven’t installed algokit, follow these [steps](/getting-started/algokit-quick-start).
[Create Algokit project ](/getting-started/algokit-quick-start#create-an-algokit-project)Create your first Algokit project
### Install PuyaPy
[Section titled “Install PuyaPy”](#install-puyapy)
The extension requires PuyaPy version `5.3.0` or higher. Install it into your project’s virtual environment:
```bash
# Activate your virtual environment first
pip install puyapy
```
We recommend installing PuyaPy in your project’s virtual environment so the extension can automatically discover it. To check the installed version, run:
```bash
puyapy --version
```
It should display `5.3.0` or higher.
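Note that the comparison needs to be numeric, not lexical: as a plain string, `"5.10.0"` sorts before `"5.3.0"`. A minimal check of the version output might look like the following sketch (the extension performs its own validation; `meets_minimum` is a hypothetical helper assuming plain `MAJOR.MINOR.PATCH` output):

```python
def meets_minimum(version: str, minimum: tuple[int, int, int] = (5, 3, 0)) -> bool:
    # Parse "MAJOR.MINOR.PATCH" into an int tuple so comparison is numeric.
    parts = tuple(int(x) for x in version.strip().split(".")[:3])
    return parts >= minimum
```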
## Step 3: Enable the Language Server
[Section titled “Step 3: Enable the Language Server”](#step-3-enable-the-language-server)
For new AlgoKit projects, the language server is enabled by default. For existing projects, you need to enable it manually:
1. Open your workspace settings (`File` > `Preferences` > `Settings` or `Cmd+,`)
2. Search for **algorandPython.languageServer.enable**
3. Check the box to enable the language server
Alternatively, add this to your `.vscode/settings.json`:
```json
{
  "algorandPython.languageServer.enable": true
}
```
To see detailed information about what the language server is doing:
1. Open the `Output panel` (`View` > `Output` or `Ctrl+Shift+U`)
2. Select `Algorand Python Language Server` from the dropdown
3. Review logs for diagnostics and extension activity

## Step 4: Create an Example Smart Contract
[Section titled “Step 4: Create an Example Smart Contract”](#step-4-create-an-example-smart-contract)
Let’s create a simple contract with a deliberate bug to demonstrate the extension’s capabilities. Replace the default Hello World contract with the contract below:
* Python
```py
from algopy import ARC4Contract, arc4, Txn, BoxMap
from algopy.arc4 import abimethod


class UserVotes(ARC4Contract):
    def __init__(self) -> None:
        # Each user can vote for multiple proposals
        self.votes = BoxMap(arc4.Address, arc4.DynamicArray[arc4.UInt64])

    @abimethod()
    def cast_vote(self, proposal_id: arc4.UInt64) -> arc4.DynamicArray[arc4.UInt64]:
        voter = arc4.Address(Txn.sender)
        if voter in self.votes:
            current_votes = self.votes[voter].copy()
            current_votes.append(proposal_id)
            # Bug in next line: mutable reference to ARC-4-encoded value must be copied using .copy() when being assigned
            self.votes[voter] = current_votes
        else:
            self.votes[voter] = arc4.DynamicArray[arc4.UInt64](proposal_id)
        return self.votes[voter]
```
This contract contains an intentional bug when updating the `votes` BoxMap in the `cast_vote` function.
## Step 5: Observe Real-Time Diagnostics
[Section titled “Step 5: Observe Real-Time Diagnostics”](#step-5-observe-real-time-diagnostics)
Once you save the file, the Algorand Python extension will analyze your code. You should see a red squiggly line under `current_votes` in the if condition.
The extension will display the error `mutable reference to ARC-4-encoded value must be copied using .copy() when being assigned to another variable` in the contract when you hover over the red line.

## Step 6: Apply Quick Fixes
[Section titled “Step 6: Apply Quick Fixes”](#step-6-apply-quick-fixes)
The extension also provides quick fixes for the issue. Look for the lightbulb icon (💡) that appears. It suggests the fix `💡 Add .copy()`. Click on the suggestion to add the fix.

## Step 7: Fixed Smart Contract
[Section titled “Step 7: Fixed Smart Contract”](#step-7-fixed-smart-contract)
Based on the extension’s diagnostics, your contract should now be updated as follows to address the identified issue:
* Python
```py
from algopy import ARC4Contract, arc4, Txn, BoxMap
from algopy.arc4 import abimethod


class UserVotes(ARC4Contract):
    def __init__(self) -> None:
        # Each user can vote for multiple proposals
        self.votes = BoxMap(arc4.Address, arc4.DynamicArray[arc4.UInt64])

    @abimethod()
    def cast_vote(self, proposal_id: arc4.UInt64) -> arc4.DynamicArray[arc4.UInt64]:
        voter = arc4.Address(Txn.sender)
        if voter in self.votes:
            current_votes = self.votes[voter].copy()
            current_votes.append(proposal_id)
            # Fixed: the mutable ARC-4-encoded value is now copied using .copy() when assigned
            self.votes[voter] = current_votes.copy()
        else:
            self.votes[voter] = arc4.DynamicArray[arc4.UInt64](proposal_id)
        return self.votes[voter]
```
## Step 8: Verify the Fixes
[Section titled “Step 8: Verify the Fixes”](#step-8-verify-the-fixes)
After applying the fixes, verify that warnings and errors have cleared in the Problems Panel. The extension will continue to provide real-time feedback as you progress in your development.

## Configuration Options
[Section titled “Configuration Options”](#configuration-options)
The extension provides additional configuration options for customizing your experience:
### Enable/Disable Language Server
[Section titled “Enable/Disable Language Server”](#enabledisable-language-server)
```json
{
  "algorandPython.languageServer.enable": true
}
```
### Log Level
[Section titled “Log Level”](#log-level)
Adjust the verbosity of messages in the Output panel:
```json
{
  "algorandPython.languageServer.logLevel": "info"
}
```
Available levels: `error`, `warning`, `info`, `debug`
### Debounce Interval
[Section titled “Debounce Interval”](#debounce-interval)
Configure the delay between code changes and diagnostic updates:
```json
{
  "algorandPython.languageServer.debounceInterval": 500
}
```
Value in milliseconds. Lower values provide faster feedback but may impact performance.
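The debounce behaviour can be illustrated with a small sketch (illustrative only; the extension’s internal implementation may differ):

```python
import time


class Debouncer:
    """Run an action only after `interval_ms` milliseconds with no further edits."""

    def __init__(self, interval_ms: int) -> None:
        self.interval = interval_ms / 1000
        self.last_edit = float("-inf")

    def edit(self) -> None:
        # Called on every change; postpones the next diagnostics run.
        self.last_edit = time.monotonic()

    def ready(self) -> bool:
        # True once the quiet period has elapsed since the last edit.
        return time.monotonic() - self.last_edit >= self.interval
```

A lower interval means `ready()` becomes true sooner after each keystroke, which is why small values trade performance for faster feedback.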
## Troubleshooting
[Section titled “Troubleshooting”](#troubleshooting)
If the extension isn’t working as expected:
### Extension Not Providing Diagnostics
[Section titled “Extension Not Providing Diagnostics”](#extension-not-providing-diagnostics)
1. Verify the extension is installed and enabled:
* Check Extensions view for `Algorand Python`
* Ensure it shows as `Enabled`
2. Confirm both extensions are installed:
* Python extension for Visual Studio Code
* Algorand Python extension
3. Verify the language server is enabled:
* Check workspace settings for `algorandPython.languageServer.enable`
* Should be set to `true`
4. Verify PuyaPy installation:
```bash
pip show puyapy
```
* Ensure version `5.3.0` or higher is installed
* Confirm it’s in the same virtual environment VS Code is using
### Check Python Interpreter
[Section titled “Check Python Interpreter”](#check-python-interpreter)
Make sure VS Code is using the correct Python interpreter:
1. Click on the Python version in the status bar (bottom right)
2. Select the interpreter from your project’s virtual environment
3. The interpreter should have PuyaPy installed
### File Not Recognized
[Section titled “File Not Recognized”](#file-not-recognized)
The extension activates for `.py` files in Algorand Python projects. Ensure:
* Your file is a Python file with `.py` extension
* The file contains Algorand Python imports (e.g., `from algopy import ...`)
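A heuristic along these lines can be sketched as follows (hypothetical; the extension’s actual detection logic may differ):

```python
import re


def looks_like_algorand_python(source: str) -> bool:
    # A file counts as Algorand Python if it imports from algopy,
    # e.g. "from algopy import ..." or "import algopy".
    return re.search(r"^\s*(from|import)\s+algopy\b", source, re.MULTILINE) is not None
```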
### Check Language Server Logs
[Section titled “Check Language Server Logs”](#check-language-server-logs)
1. Open Output panel (`View` > `Output`)
2. Select `Algorand Python Language Server` from the dropdown
3. Look for error messages or warnings
4. Set log level to `debug` for more detailed information
## Summary
[Section titled “Summary”](#summary)
In this tutorial, we covered:
* Installing and configuring the Algorand Python language extension
* Setting up a project
* Using real-time diagnostics to identify issues
* Applying quick fixes to resolve common problems
* Troubleshooting common extension issues
The Algorand Python extension provides valuable assistance throughout your development process, helping you write more correct and robust smart contracts by catching issues early and suggesting improvements as you code.
## Next Steps
[Section titled “Next Steps”](#next-steps)
Learn Algorand Python concepts, write and test smart contracts, debug with AVM Debugger, and follow best practices.
[Algorand Python Concepts ](/concepts/smart-contracts/languages/python)Explore Algorand Python concepts
[Unit Tests ](/algokit/unit-testing/python/overview)Learn about unit testing Python contracts
[AVM Debugger ](/algokit/avm-debugger)Try the AVM Debugger
# Algorand TypeScript Language Server
The Algorand TypeScript language extension brings language server-powered capabilities to your smart contract authoring experience in Visual Studio Code. It extends the results from your installed TypeScript language server to provide Algorand TypeScript-specific diagnostics and code actions.
This tutorial demonstrates how to set up and use the Algorand TypeScript extension to identify and resolve common issues early in your development workflow. We’ll walk through identifying and fixing bugs in an Algorand TypeScript smart contract using the extension’s diagnostic features.
## Prerequisites
[Section titled “Prerequisites”](#prerequisites)
* [Visual Studio Code](https://code.visualstudio.com/download) 1.80.0 or higher
* [puya-ts](https://www.npmjs.com/package/@algorandfoundation/puya-ts) 1.0.1 or higher
* Basic understanding of [Algorand smart contracts using TypeScript](/concepts/smart-contracts/languages/typescript/)
Caution
The Algorand TypeScript extension is currently in **beta**. It works alongside your existing TypeScript language server to provide additional Algorand-specific diagnostics and code actions for smart contract development.
## Step 1: Install the Extension
[Section titled “Step 1: Install the Extension”](#step-1-install-the-extension)
Install the Algorand TypeScript language extension from the VSCode Marketplace:
1. Open the Extensions view in VSCode (`Ctrl+Shift+X` or `Cmd+Shift+X`)
2. Search for `Algorand TypeScript`
3. Click `Install` on the extension published by the Algorand Foundation
Alternatively, install directly from the [marketplace](https://marketplace.visualstudio.com/items?itemName=AlgorandFoundation.algorand-typescript-vscode).

## Step 2: Set Up Your Project
[Section titled “Step 2: Set Up Your Project”](#step-2-set-up-your-project)
### Initialize an AlgoKit Project
[Section titled “Initialize an AlgoKit Project”](#initialize-an-algokit-project)
If you’re starting a new project, use AlgoKit to generate a TypeScript smart contract project:
```bash
algokit init
```
Select options for a TypeScript smart contract project from the interactive prompts.
If you haven’t installed algokit, follow these [steps](/getting-started/algokit-quick-start).
[Create Algokit project ](/getting-started/algokit-quick-start#create-an-algokit-project)Create your first Algokit project
### Install puya-ts
[Section titled “Install puya-ts”](#install-puya-ts)
The extension requires `puya-ts` version `1.0.1` or higher. Install it as a dev dependency in your project:
```bash
npm install --save-dev @algorandfoundation/puya-ts
```
## Step 3: Enable the Language Server
[Section titled “Step 3: Enable the Language Server”](#step-3-enable-the-language-server)
For new AlgoKit projects, the language server is enabled by default. For existing projects, you need to enable it manually:
1. Open your workspace settings (`File` > `Preferences` > `Settings` or `Cmd+,`)
2. Search for `algorandTypeScript.languageServer.enable`
3. Check the box to enable the language server
Alternatively, add this to your `.vscode/settings.json`:
```json
{
  "algorandTypeScript.languageServer.enable": true
}
```
To see detailed information about what the language server is doing:
1. Open the `Output panel` (`View` > `Output` or `Ctrl+Shift+U`)
2. Select `Algorand TypeScript Language Server` from the dropdown
3. Review logs for diagnostics and extension activity

## Step 4: Create an Example Smart Contract
[Section titled “Step 4: Create an Example Smart Contract”](#step-4-create-an-example-smart-contract)
Let’s create a simple contract with a deliberate bug to demonstrate the extension’s capabilities. Replace the default Hello World contract with the contract below:
* TypeScript
```ts
import { BoxMap, clone, Contract, Txn } from '@algorandfoundation/algorand-typescript'
import { Address, DynamicArray, Uint64 } from '@algorandfoundation/algorand-typescript/arc4'

export class UserVotes extends Contract {
  // Each user can vote for multiple proposals
  votes = BoxMap<Address, DynamicArray<Uint64>>({ keyPrefix: '' })

  castVote(proposalId: Uint64): DynamicArray<Uint64> {
    const voter = new Address(Txn.sender)
    if (this.votes(voter).exists) {
      // Bug in line below: cannot create multiple references to a mutable stack type, the value must be copied using clone(...) when being assigned to another variable
      const currentVotes = this.votes(voter).value
      currentVotes.push(proposalId)
      this.votes(voter).value = clone(currentVotes)
    } else {
      this.votes(voter).value = new DynamicArray<Uint64>(proposalId)
    }
    return this.votes(voter).value
  }
}
```
This contract contains an intentional bug when reading `currentVotes` in the `castVote` function.
## Step 5: Observe Real-Time Diagnostics
[Section titled “Step 5: Observe Real-Time Diagnostics”](#step-5-observe-real-time-diagnostics)
Once you save the file, the Algorand TypeScript extension will analyze your code. You should see a red squiggly line under `currentVotes` in the if condition.
The extension will display the error `cannot create multiple references to a mutable stack type, the value must be copied using clone(...) when being assigned to another variable` in the contract when you hover over the red line.

## Step 6: Apply Quick Fixes
[Section titled “Step 6: Apply Quick Fixes”](#step-6-apply-quick-fixes)
The extension also provides quick fixes for the issue. Look for the lightbulb icon (💡) that appears. It suggests the fix `💡 Wrap expression in clone(...)`. Click on the suggestion to add the fix.

## Step 7: Fix the Smart Contract
[Section titled “Step 7: Fix the Smart Contract”](#step-7-fix-the-smart-contract)
Based on the extension’s diagnostics, your contract should now be updated as follows to address the identified issue:
* TypeScript
```ts
import { BoxMap, clone, Contract, Txn } from '@algorandfoundation/algorand-typescript'
import { Address, DynamicArray, Uint64 } from '@algorandfoundation/algorand-typescript/arc4'

export class UserVotes extends Contract {
  // Each user can vote for multiple proposals
  votes = BoxMap<Address, DynamicArray<Uint64>>({ keyPrefix: '' })

  castVote(proposalId: Uint64): DynamicArray<Uint64> {
    const voter = new Address(Txn.sender)
    if (this.votes(voter).exists) {
      // Fixed: the expression is wrapped in clone(...) when being assigned to another variable
      const currentVotes = clone(this.votes(voter).value)
      currentVotes.push(proposalId)
      this.votes(voter).value = clone(currentVotes)
    } else {
      this.votes(voter).value = new DynamicArray<Uint64>(proposalId)
    }
    return this.votes(voter).value
  }
}
```
## Step 8: Verify the Fixes
[Section titled “Step 8: Verify the Fixes”](#step-8-verify-the-fixes)
After applying the fixes, verify that warnings and errors have cleared in the Problems Panel. The extension will continue to provide real-time feedback as you progress in your development.

## Configuration Options
[Section titled “Configuration Options”](#configuration-options)
The extension provides additional configuration options for customizing your experience:
### Log Level
[Section titled “Log Level”](#log-level)
Adjust the verbosity of messages in the Output panel:
```json
{
  "algorandTypeScript.languageServer.logLevel": "info"
}
```
Available levels: `off`, `error`, `warn`, `info`, `debug`, `trace`
## Troubleshooting
[Section titled “Troubleshooting”](#troubleshooting)
If the extension isn’t working as expected:
### Extension Not Providing Diagnostics
[Section titled “Extension Not Providing Diagnostics”](#extension-not-providing-diagnostics)
1. Verify the extension is installed and enabled:
* Check Extensions view for `Algorand TypeScript`
* Ensure it shows as `Enabled`
2. Confirm the language server is enabled:
* Check workspace settings for `algorandTypeScript.languageServer.enable`
* Should be set to `true`
3. Verify puya-ts installation:
```bash
npm list @algorandfoundation/puya-ts
```
* Ensure version `1.0.1` or higher is installed
### File Not Recognized
[Section titled “File Not Recognized”](#file-not-recognized)
The extension only activates for `.algo.ts` files. Ensure your smart contract files use this extension.
### Conflicting Diagnostics
[Section titled “Conflicting Diagnostics”](#conflicting-diagnostics)
If you see duplicate or conflicting messages:
* The extension works alongside the standard TypeScript language server
* Some messages come from TypeScript, others from the Algorand extension
* Both sets of diagnostics are valuable for different aspects of your code
### Check Language Server Logs
[Section titled “Check Language Server Logs”](#check-language-server-logs)
1. Open Output panel (`View` > `Output`)
2. Select `Algorand TypeScript Language Server` from dropdown
3. Look for error messages or warnings
4. Set log level to `debug` for more detailed information
## Summary
[Section titled “Summary”](#summary)
In this tutorial, we covered:
* Installing and configuring the Algorand TypeScript language extension
* Setting up a project with puya-ts
* Using real-time diagnostics to identify issues
* Applying quick fixes to resolve common problems
* Troubleshooting common extension issues
The Algorand TypeScript extension provides valuable assistance throughout your development process, helping you write more correct and robust smart contracts by catching issues early and suggesting improvements as you code.
## Next Steps
[Section titled “Next Steps”](#next-steps)
Learn Algorand TypeScript concepts, write and test smart contracts, debug with AVM Debugger, and follow best practices.
[Algorand TypeScript Concepts ](/concepts/smart-contracts/languages/typescript)Explore Algorand TypeScript concepts
[Unit Tests ](/algokit/unit-testing/typescript/overview)Learn about unit testing TypeScript contracts
[AVM Debugger ](/algokit/avm-debugger)Try the AVM Debugger
# PuyaPy compiler
The PuyaPy compiler is a multi-stage, optimising compiler that takes Algorand Python and prepares it for execution on the AVM. PuyaPy ensures the resulting AVM bytecode has execution semantics that match the given Python code. PuyaPy produces output that is directly compatible with [AlgoKit typed clients](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/generate.md#1-typed-clients) to make deployment and calling easy (among other formats).
The PuyaPy compiler is based on the [Puya compiler architecture](https://github.com/algorandfoundation/puya/blob/main/ARCHITECTURE.md), which allows for multiple frontend languages to leverage the majority of the compiler logic so adding new frontend languages for execution on Algorand is relatively easy.
## Compiler installation
[Section titled “Compiler installation”](#compiler-installation)
The minimum supported Python version for running the PuyaPy compiler is 3.12.
There are three ways of installing the PuyaPy compiler.
1. You can install [AlgoKit CLI](https://github.com/algorandfoundation/algokit-cli?tab=readme-ov-file#install) and you can then use the `algokit compile py` command.
2. You can install the PuyaPy compiler into your project and thus lock the compiler version for that project:
```shell
pip install puyapy
# OR
poetry add puyapy --group=dev
```
Note: if you do this, then when you use `algokit compile py` within that project directory it will invoke the installed compiler rather than a global one.
3. You can install the compiler globally using [pipx](https://pipx.pypa.io/stable/):
```shell
pipx install puyapy
```
If you just want to play with some examples, you can clone the repo and have a poke around:
```shell
git clone https://github.com/algorandfoundation/puya.git
cd puya
poetry install
poetry shell
# compile the "Hello World" example
puyapy examples/hello_world
```
## Using the compiler
[Section titled “Using the compiler”](#using-the-compiler)
To check that you can run the compiler successfully after installation, you can run the help command:
```default
puyapy -h
# OR
algokit compile py -h
```
To compile a contract or contracts, just supply the path(s) - either to the .py files themselves, or the containing directories. In the case of containing directories, any (non-abstract) contracts discovered therein will be compiled, allowing you to compile multiple contracts at once. You can also supply more than one path at a time to the compiler.
e.g. either `puyapy my_project/` or `puyapy my_project/contract.py` will work to compile a single contract.
## Type checking
[Section titled “Type checking”](#type-checking)
The first and second steps of the [compiler pipeline](https://github.com/algorandfoundation/puya/blob/main/ARCHITECTURE.md) are significant to note, because it’s where we perform type checking. We leverage [MyPy](https://mypy-lang.org/) to do this, so we recommend that you install and use the latest version of MyPy in your development environment to get the best typing information that aligns to what the PuyaPy compiler expects. This should work with standard Python tooling, e.g. Visual Studio Code, PyCharm, et al.
The easiest way to get a productive development environment with Algorand Python is to instantiate a template with AlgoKit via `algokit init -t python`. This will give you a full development environment with intellisense, linting, automatic formatting, breakpoint debugging, deployment and CI/CD.
Alternatively, you can construct your own environment by configuring MyPy, Ruff, etc. with the same configuration files [used by that template](https://github.com/algorandfoundation/algokit-python-template).
The MyPy configuration that PuyaPy uses can be found in [parse.py](https://github.com/algorandfoundation/puya/blob/8be90f7c84ecd6eaa972e4dbf82f3ec7a616fc75/src/puyapy/parse.py#L274).
## Compiler usage
[Section titled “Compiler usage”](#compiler-usage)
The options available for the compiler can be seen by executing `puyapy -h` or `algokit compile py -h`:
```default
Usage: puyapy [ARGS] [OPTIONS]
PuyaPy compiler for compiling Algorand Python to TEAL
╭─ Commands ─────────────────────────────────────────────────────────────────────────────╮
│ --help -h Display this message and exit. │
│ --version Display application version. │
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────╮
│ * PATH Files or directories to compile [required] │
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Outputs ──────────────────────────────────────────────────────────────────────────────╮
│ Options for controlling what is output and where │
│ │
│ --out-dir Path for outputting artefacts │
│ --log-level Minimum level to log to console [choices: │
│ notset, debug, info, warning, error, critical] │
│ [default: info] │
│ --output-teal --no-output-teal Output TEAL code [default: True] │
│ --output-source-map Output debug source maps [default: True] │
│ --no-output-source-map │
│ --output-arc56 --no-output-arc56 Output {contract}.arc56.json ARC-56 app spec │
│ file [default: True] │
│ --output-arc32 --no-output-arc32 Output {contract}.arc32.json ARC-32 app spec │
│ file [default: False] │
│ --output-bytecode Output AVM bytecode [default: False] │
│ --no-output-bytecode │
│ --output-client Output Algorand Python contract client for typed │
│ --no-output-client ARC-4 ABI calls [default: False] │
│ --debug-level -g Output debug information level, 0 = none, 1 = │
│ debug, 2 = reserved for future use [choices: 0, │
│ 1, 2] [default: 1] │
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Compilation ──────────────────────────────────────────────────────────────────────────╮
│ Options that affect the compilation process, such as optimisation options etc. │
│ │
│ --optimization-level -O Set optimization level of output TEAL / AVM bytecode │
│ [choices: 0, 1, 2] [default: 1] │
│ --target-avm-version Target AVM version [choices: 10, 11, 12, 13] │
│ [default: 11] │
│ --resource-encoding If "index", then resource types (Application, Asset, │
│ Account) in ABI methods should be passed as an index │
│ into their appropriate foreign array. The default │
│ option "value", as of PuyaPy 5.0, means these values │
│ will be passed directly. [choices: index, value] │
│ [default: value] │
│ --locals-coalescing-strategy Strategy choice for out-of-ssa local variable │
│ coalescing. The best choice for your app is best │
│ determined through experimentation [choices: │
│ root-operand, root-operand-excluding-args, │
│ aggressive] [default: root-operand] │
│ --validate-abi-args Validates ABI transaction arguments by ensuring they │
│ --no-validate-abi-args are encoded correctly [default: True] │
│ --validate-abi-return Validates encoding of ABI return values when using │
│ --no-validate-abi-return .from_log(), arc4.abi_call, arc4.arc4_create and │
│ arc4.arc4_update [default: True] │
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Templating ───────────────────────────────────────────────────────────────────────────╮
│ Options for controlling the generation of TEAL template files │
│ │
│ --template-var -T Define template vars for use when assembling via │
│ --output-bytecode. Should be specified without the prefix │
│ (see --template-vars-prefix), e.g. -T SOME_INT=1234 │
│ SOME_BYTES=0x1A2B SOME_BOOL=True -T SOME_STR="hello" │
│ --template-vars-prefix Define the prefix to use with --template-var [default: │
│ TMPL_] │
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Additional outputs ───────────────────────────────────────────────────────────────────╮
│ Controls additional compiler outputs that may be useful to compiler developers. │
│ │
│ --output-awst Output parsed result of AWST [default: False] │
│ --output-awst-json Output parsed result of AWST as JSON [default: │
│ False] │
│ --output-source-annotations-json Output source annotations result of AWST parse as │
│ JSON [default: False] │
│ --output-ssa-ir Output IR (in SSA form) before optimizations │
│ [default: False] │
│ --output-optimization-ir Output IR after each optimization [default: False] │
│ --output-destructured-ir Output IR after SSA destructuring and before MIR │
│ [default: False] │
│ --output-memory-ir Output MIR before lowering to TEAL [default: False] │
│ --output-teal-intermediates Output TEAL before peephole optimization and before │
│ block optimization [default: False] │
│ --output-op-statistics Output statistics about ops used for each program │
│ compiled [default: False] │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
### Defining template values
[Section titled “Defining template values”](#defining-template-values)
[Template Variables](/docs/algorand-python/python/latest/api/api-algopy#algopy.TemplateVar), can be replaced with literal values during compilation to bytecode using the `--template-var` option. Additionally, Algorand Python functions that create AVM bytecode, such as [compile\_contract](/docs/algorand-python/python/latest/api/api-algopy#algopy.compile_contract) and [compile\_logicsig](/docs/algorand-python/python/latest/api/api-algopy#algopy.compile_logicsig), can also provide the specified values.
#### Examples of Variable Definitions
[Section titled “Examples of Variable Definitions”](#examples-of-variable-definitions)
The table below illustrates how different variables and values can be defined:
| Variable Type | Example Algorand Python | Value definition example |
| -------------------------------------------------------------------------- | -------------------------------------------------------------- | ------------------------ |
| [UInt64](/docs/algorand-python/python/latest/api/api-algopy#algopy.UInt64) | `algopy.TemplateVar[UInt64]("SOME_INT")` | `SOME_INT=1234` |
| [Bytes](/docs/algorand-python/python/latest/api/api-algopy#algopy.Bytes) | `algopy.TemplateVar[Bytes]("SOME_BYTES")` | `SOME_BYTES=0x1A2B` |
| [String](/docs/algorand-python/python/latest/api/api-algopy#algopy.String) | `algopy.TemplateVar[String]("SOME_STR")` | `SOME_STR="hello"` |
All template values specified via the command line are prefixed with “TMPL\_” by default. The default prefix can be modified using the `--template-vars-prefix` option.
## Sample `pyproject.toml`
[Section titled “Sample pyproject.toml”](#sample-pyprojecttoml)
A sample `pyproject.toml` file with known good configuration is:
```toml
[tool.poetry]
name = "algorand_python_contract"
version = "0.1.0"
description = "Algorand smart contracts"
authors = ["Name "]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.12"
algokit-utils = "^2.2.0"
python-dotenv = "^1.0.0"
algorand-python = "^1.0.0"
[tool.poetry.group.dev.dependencies]
black = { extras = ["d"], version = "*" }
ruff = "^0.1.6"
mypy = "*"
pytest = "*"
pytest-cov = "*"
pip-audit = "*"
pre-commit = "*"
puyapy = "^1.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
[tool.ruff]
line-length = 120
select = [
"E",
"F",
"ANN",
"UP",
"N",
"C4",
"B",
"A",
"YTT",
"W",
"FBT",
"Q",
"RUF",
"I",
]
ignore = [
"ANN101", # no type for self
"ANN102", # no type for cls
]
unfixable = ["B", "RUF"]
[tool.ruff.flake8-annotations]
allow-star-arg-any = true
suppress-none-returning = true
[tool.pytest.ini_options]
pythonpath = ["smart_contracts", "tests"]
[tool.mypy]
files = "smart_contracts/"
python_version = "3.12"
disallow_any_generics = true
disallow_subclassing_any = true
disallow_untyped_calls = true
disallow_untyped_defs = true
disallow_incomplete_defs = true
check_untyped_defs = true
disallow_untyped_decorators = true
warn_redundant_casts = true
warn_unused_ignores = true
warn_return_any = true
strict_equality = true
strict_concatenate = true
disallow_any_unimported = true
disallow_any_expr = true
disallow_any_decorated = true
disallow_any_explicit = true
```
# Language Guide
Algorand Python is conceptually two things:
1. A partial implementation of the Python programming language that runs on the AVM.
2. A framework for development of Algorand smart contracts and logic signatures, with Pythonic interfaces to underlying AVM functionality.
You can install the Algorand Python types from PyPi:
> `pip install algorand-python`
or
> `poetry add algorand-python`
***
As a partial implementation of the Python programming language, it maintains the syntax and semantics of Python. The subset of the language that is supported will grow over time, but it will never be a complete implementation due to the restricted nature of the AVM as an execution environment. As a trivial example, the `async` and `await` keywords, and all associated features, do not make sense to implement.
Being a partial implementation of Python means that existing developer tooling like IDE syntax highlighting, static type checkers, linters, and auto-formatters, will work out-of-the-box. This is as opposed to an approach to smart contract development that adds or alters language elements or semantics, which then requires custom developer tooling support, and more importantly, requires the developer to learn and understand the potentially non-obvious differences from regular Python.
The greatest advantage to maintaining semantic and syntactic compatibility, however, is only realised in combination with the framework approach. Supplying a set of interfaces representing the required smart contract development and AVM functionality allows for the possibility of implementing those interfaces in pure Python! This will make it possible in the near future for you to execute tests against your smart contracts without deploying them to Algorand, and even step into and break-point debug your code from those tests.
The framework provides interfaces to the underlying AVM types and operations. By virtue of the AVM being statically typed, these interfaces are also statically typed, and require your code to be as well.
The most basic types on the AVM are `uint64` and `bytes[]`, representing unsigned 64-bit integers and byte arrays respectively. These are represented by [`UInt64`](/docs/algorand-python/python/latest/api/api-algopy#algopy.UInt64) and [`Bytes`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Bytes) in Algorand Python. There are further “bounded” types supported by the AVM which are backed by these two simple primitives. For example, `bigint` represents a variably sized (up to 512-bits), unsigned integer, but is actually backed by a `bytes[]`. This is represented by [`BigUInt`](/docs/algorand-python/python/latest/api/api-algopy#algopy.BigUInt) in Algorand Python.
Unfortunately, none of these types map to standard Python primitives. In Python, an `int` is unsigned, and effectively unbounded. A `bytes` similarly is limited only by the memory available, whereas an AVM `bytes[]` has a maximum length of 4096. In order to both maintain semantic compatibility and allow for a framework implementation in plain Python that will fail under the same conditions as when deployed to the AVM, support for Python primitives is [limited](/algokit/languages/python/lg-types/#python-built-in-types).
For more information on the philosophy and design of Algorand Python, please see [“Principles”](/algokit/languages/python/principles/#principles).
If you aren’t familiar with Python, a good place to start before continuing below is with the [official tutorial](https://docs.python.org/3/tutorial/index.html). Just beware that as mentioned above, [not all features are supported](/algokit/languages/python/lg-unsupported-python-features/).
## Table of Contents
[Section titled “Table of Contents”](#table-of-contents)
* [Program structure](/algokit/languages/python/lg-structure/)
* [Modules](/algokit/languages/python/lg-structure/#modules)
* [Typing](/algokit/languages/python/lg-structure/#typing)
* [Subroutines](/algokit/languages/python/lg-structure/#subroutines)
* [Contract classes](/algokit/languages/python/lg-structure/#contract-classes)
* [Contract class configuration](/algokit/languages/python/lg-structure/#contract-class-configuration)
* [Example: Simplest possible `algopy.Contract` implementation](/algokit/languages/python/lg-structure/#example-simplest-possible-algopy-contract-implementation)
* [Example: Simple call counter](/algokit/languages/python/lg-structure/#example-simple-call-counter)
* [Example: Simplest possible `algopy.ARC4Contract` implementation](/algokit/languages/python/lg-structure/#example-simplest-possible-algopy-arc4contract-implementation)
* [Example: An ARC-4 call counter](/algokit/languages/python/lg-structure/#example-an-arc-4-call-counter)
* [Logic signatures](/algokit/languages/python/lg-structure/#logic-signatures)
* [Types](/algokit/languages/python/lg-types/)
* [AVM types](/algokit/languages/python/lg-types/#avm-types)
* [UInt64](/algokit/languages/python/lg-types/#uint64)
* [Bytes](/algokit/languages/python/lg-types/#bytes)
* [String](/algokit/languages/python/lg-types/#string)
* [BigUInt](/algokit/languages/python/lg-types/#biguint)
* [bool](/algokit/languages/python/lg-types/#bool)
* [Account](/algokit/languages/python/lg-types/#account)
* [Asset](/algokit/languages/python/lg-types/#asset)
* [Application](/algokit/languages/python/lg-types/#application)
* [Python built-in types](/algokit/languages/python/lg-types/#python-built-in-types)
* [bool](/algokit/languages/python/lg-types/#id2)
* [tuple](/algokit/languages/python/lg-types/#tuple)
* [typing.NamedTuple](/algokit/languages/python/lg-types/#typing-namedtuple)
* [None](/algokit/languages/python/lg-types/#none)
* [int, str, bytes, float](/algokit/languages/python/lg-types/#int-str-bytes-float)
* [Template variables](/algokit/languages/python/lg-types/#template-variables)
* [ARC-4 types](/algokit/languages/python/lg-types/#arc-4-types)
* [Type Validation](/algokit/languages/python/lg-types/#type-validation)
* [Validated Sources of Values](/algokit/languages/python/lg-types/#validated-sources-of-values)
* [Non-Validated Sources](/algokit/languages/python/lg-types/#non-validated-sources)
* [Control flow structures](/algokit/languages/python/lg-control/)
* [If statements](/algokit/languages/python/lg-control/#if-statements)
* [Ternary conditions](/algokit/languages/python/lg-control/#ternary-conditions)
* [While loops](/algokit/languages/python/lg-control/#while-loops)
* [For Loops](/algokit/languages/python/lg-control/#for-loops)
* [Match Statements](/algokit/languages/python/lg-control/#match-statements)
* [Module level constructs](/algokit/languages/python/lg-modules/)
* [Module constants](/algokit/languages/python/lg-modules/#module-constants)
* [If statements](/algokit/languages/python/lg-modules/#if-statements)
* [Integer math](/algokit/languages/python/lg-modules/#integer-math)
* [Strings](/algokit/languages/python/lg-modules/#strings)
* [Type aliases](/algokit/languages/python/lg-modules/#type-aliases)
* [Python builtins](/algokit/languages/python/lg-builtins/)
* [len](/algokit/languages/python/lg-builtins/#len)
* [range](/algokit/languages/python/lg-builtins/#range)
* [enumerate](/algokit/languages/python/lg-builtins/#enumerate)
* [reversed](/algokit/languages/python/lg-builtins/#reversed)
* [types](/algokit/languages/python/lg-builtins/#types)
* [Error handling and assertions](/algokit/languages/python/lg-errors/)
* [Assertions](/algokit/languages/python/lg-errors/#assertions)
* [Assertion error handling](/algokit/languages/python/lg-errors/#assertion-error-handling)
* [Explicit failure](/algokit/languages/python/lg-errors/#explicit-failure)
* [Exception handling](/algokit/languages/python/lg-errors/#exception-handling)
* [Data structures](/algokit/languages/python/lg-data-structures/)
* [Mutability vs Immutability](/algokit/languages/python/lg-data-structures/#mutability-vs-immutability)
* [Static size vs Dynamic size](/algokit/languages/python/lg-data-structures/#static-size-vs-dynamic-size)
* [Size constraints](/algokit/languages/python/lg-data-structures/#size-constraints)
* [Algorand Python composite types](/algokit/languages/python/lg-data-structures/#algorand-python-composite-types)
* [`tuple`](/algokit/languages/python/lg-data-structures/#tuple)
* [`typing.NamedTuple`](/algokit/languages/python/lg-data-structures/#typing-namedtuple)
* [`Struct`](/algokit/languages/python/lg-data-structures/#struct)
* [`arc4.Tuple`](/algokit/languages/python/lg-data-structures/#arc4-tuple)
* [`arc4.Struct`](/algokit/languages/python/lg-data-structures/#arc4-struct)
* [Algorand Python array types](/algokit/languages/python/lg-data-structures/#algorand-python-array-types)
* [`algopy.FixedArray`](/algokit/languages/python/lg-data-structures/#algopy-fixedarray)
* [`algopy.Array`](/algokit/languages/python/lg-data-structures/#algopy-array)
* [`algopy.ReferenceArray`](/algokit/languages/python/lg-data-structures/#algopy-referencearray)
* [`algopy.ImmutableArray`](/algokit/languages/python/lg-data-structures/#algopy-immutablearray)
* [`algopy.arc4.DynamicArray` / `algopy.arc4.StaticArray`](/algokit/languages/python/lg-data-structures/#algopy-arc4-dynamicarray-algopy-arc4-staticarray)
* [Tips](/algokit/languages/python/lg-data-structures/#tips)
* [Storing data on-chain](/algokit/languages/python/lg-storage/)
* [Global storage](/algokit/languages/python/lg-storage/#global-storage)
* [Local storage](/algokit/languages/python/lg-storage/#local-storage)
* [Box storage](/algokit/languages/python/lg-storage/#box-storage)
* [Scratch storage](/algokit/languages/python/lg-storage/#scratch-storage)
* [Logging](/algokit/languages/python/lg-logs/)
* [Transactions](/algokit/languages/python/lg-transactions/)
* [Group Transactions](/algokit/languages/python/lg-transactions/#group-transactions)
* [ARC-4 parameter](/algokit/languages/python/lg-transactions/#arc-4-parameter)
* [Group Index](/algokit/languages/python/lg-transactions/#group-index)
* [Inner Transactions](/algokit/languages/python/lg-transactions/#inner-transactions)
* [Examples](/algokit/languages/python/lg-transactions/#examples)
* [Limitations](/algokit/languages/python/lg-transactions/#limitations)
* [AVM operations](/algokit/languages/python/lg-ops/)
* [Txn](/algokit/languages/python/lg-ops/#txn)
* [Global](/algokit/languages/python/lg-ops/#global)
* [Opcode budgets](/algokit/languages/python/lg-opcode-budget/)
* [ARC-4: Application Binary Interface](/algokit/languages/python/lg-arc4/)
* [ARC-32 and ARC-56](/algokit/languages/python/lg-arc4/#arc-32-and-arc-56)
* [Methods](/algokit/languages/python/lg-arc4/#methods)
* [Router](/algokit/languages/python/lg-arc4/#router)
* [Types](/algokit/languages/python/lg-arc4/#types)
* [Booleans](/algokit/languages/python/lg-arc4/#booleans)
* [Unsigned ints](/algokit/languages/python/lg-arc4/#unsigned-ints)
* [Unsigned fixed point decimals](/algokit/languages/python/lg-arc4/#unsigned-fixed-point-decimals)
* [Bytes and strings](/algokit/languages/python/lg-arc4/#bytes-and-strings)
* [Static arrays](/algokit/languages/python/lg-arc4/#static-arrays)
* [Address](/algokit/languages/python/lg-arc4/#address)
* [Dynamic arrays](/algokit/languages/python/lg-arc4/#dynamic-arrays)
* [Tuples](/algokit/languages/python/lg-arc4/#tuples)
* [Structs](/algokit/languages/python/lg-arc4/#structs)
* [ARC-4 Container Packing](/algokit/languages/python/lg-arc4/#arc-4-container-packing)
* [Reference types](/algokit/languages/python/lg-arc4/#reference-types)
* [Mutability](/algokit/languages/python/lg-arc4/#mutability)
* [ARC-28: Structured event logging](/algokit/languages/python/lg-arc28/)
* [Emitting Events](/algokit/languages/python/lg-arc28/#emitting-events)
* [Calling other applications](/algokit/languages/python/lg-calling-apps/)
* [`algopy.arc4.abi_call`](/algokit/languages/python/lg-calling-apps/#algopy-arc4-abi-call)
* [Alternative ways to use `arc4.abi_call`](/algokit/languages/python/lg-calling-apps/#alternative-ways-to-use-arc4-abi-call)
* [`algopy.arc4.arc4_create`](/algokit/languages/python/lg-calling-apps/#algopy-arc4-arc4-create)
* [`algopy.arc4.arc4_update`](/algokit/languages/python/lg-calling-apps/#algopy-arc4-arc4-update)
* [Using `itxn.ApplicationCall`](/algokit/languages/python/lg-calling-apps/#using-itxn-applicationcall)
* [Compiling to AVM bytecode](/algokit/languages/python/lg-compile/)
* [Outputting AVM bytecode from CLI](/algokit/languages/python/lg-compile/#outputting-avm-bytecode-from-cli)
* [Obtaining bytecode within other contracts](/algokit/languages/python/lg-compile/#obtaining-bytecode-within-other-contracts)
* [Template variables](/algokit/languages/python/lg-compile/#template-variables)
* [CLI](/algokit/languages/python/lg-compile/#cli)
* [Within other contracts](/algokit/languages/python/lg-compile/#within-other-contracts)
* [Unsupported Python features](/algokit/languages/python/lg-unsupported-python-features/)
* [raise, try/except/finally](/algokit/languages/python/lg-unsupported-python-features/#raise-try-except-finally)
* [with](/algokit/languages/python/lg-unsupported-python-features/#with)
* [async](/algokit/languages/python/lg-unsupported-python-features/#async)
* [closures & lambdas](/algokit/languages/python/lg-unsupported-python-features/#closures-lambdas)
* [global keyword](/algokit/languages/python/lg-unsupported-python-features/#global-keyword)
* [Inheritance (outside of contract classes)](/algokit/languages/python/lg-unsupported-python-features/#inheritance-outside-of-contract-classes)
* [PuyaPy migration from 4.x to 5.0](/algokit/languages/python/lg-migration-4-5/)
* [`algopy.Array` to `algopy.ReferenceArray`](/algokit/languages/python/lg-migration-4-5/#algopy-array-to-algopy-referencearray)
* [`algopy.Account`, `algopy.Asset` and `algopy.Application` routing behaviour](/algokit/languages/python/lg-migration-4-5/#algopy-account-algopy-asset-and-algopy-application-routing-behaviour)
* [Constructor signatures of `ImmutableArray` and `ReferenceArray`](/algokit/languages/python/lg-migration-4-5/#constructor-signatures-of-immutablearray-and-referencearray)
# ARC-28: Structured event logging
[ARC-28](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0028.md) provides a methodology for structured logging by Algorand smart contracts. It introduces the concept of Events, where data contained in logs may be categorized and structured.
Each Event is identified by a unique 4-byte identifier derived from its `Event Signature`. The Event Signature is a UTF-8 string comprised of the event’s name, followed by the names of the [ARC-4](/algokit/languages/python/lg-arc4/) data types contained in the event, all enclosed in parentheses (`EventName(type1,type2,...)`) e.g.:
```default
Swapped(uint64,uint64)
```
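The 4-byte identifier can be reproduced off-chain with plain Python. The sketch below is for illustration only (the `event_selector` helper is a hypothetical name, and it assumes your Python build's OpenSSL exposes SHA-512/256); it uses the same hashing scheme ARC-4 uses for method selectors:

```python
import hashlib

def event_selector(signature: str) -> bytes:
    # First 4 bytes of the SHA-512/256 hash of the UTF-8 event signature,
    # mirroring how ARC-4 method selectors are derived
    return hashlib.new("sha512_256", signature.encode("utf-8")).digest()[:4]

selector = event_selector("Swapped(uint64,uint64)")
assert len(selector) == 4
```

A client can compare this selector against the first 4 bytes of each log entry to identify which event was emitted.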
Events are emitted by including them in the [log output](/algokit/languages/python/lg-logs/). The metadata that identifies the event should then be included in the ARC-4 contract output so that a calling client can read the logs and extract the structured data. This part of the ARC-28 spec isn't yet implemented in Algorand Python, but it's on the roadmap.
## Emitting Events
[Section titled “Emitting Events”](#emitting-events)
To emit an ARC-28 event in Algorand Python you can use the `emit` function, which appears in the `algopy.arc4` namespace for convenience since it heavily uses ARC-4 types and is essentially an extension of the ARC-4 specification. This function takes care of encoding the event payload to conform to the ARC-28 specification and there are 3 overloads:
* An [ARC-4 struct](/algokit/languages/python/lg-arc4/#structs), from which the name of the struct will be used as the event name and the struct parameters will be used as the event fields - `arc4.emit(Swapped(a, b))`
* An event signature as a [string literal (or module variable)](/algokit/languages/python/lg-types/), followed by the values - `arc4.emit("Swapped(uint64,uint64)", a, b)`
* An event name as a [string literal (or module variable)](/algokit/languages/python/lg-types/), followed by the values - `arc4.emit("Swapped", a, b)`
Here’s an example contract that emits events:
```python
from algopy import ARC4Contract, arc4


class Swapped(arc4.Struct):
    a: arc4.UInt64
    b: arc4.UInt64


class EventEmitter(ARC4Contract):
    @arc4.abimethod
    def emit_swapped(self, a: arc4.UInt64, b: arc4.UInt64) -> None:
        arc4.emit(Swapped(b, a))
        arc4.emit("Swapped(uint64,uint64)", b, a)
        arc4.emit("Swapped", b, a)
```
It’s worth noting that the ARC-28 event signature needs to be known at compile time so the event name can’t be a dynamic type and must be a static string literal or string module constant. If you want to emit dynamic events you can do so using the [`log` method](/algokit/languages/python/lg-logs/), but you’d need to manually construct the correct series of bytes and the compiler won’t be able to emit the ARC-28 metadata so you’ll need to also manually parse the logs in your client.
Examples of manually constructing an event:
```python
from algopy import String, UInt64, arc4, log, op

# This is essentially what the `emit` method is doing, noting that a, b need to be
# encoded as a tuple, so the below (simple concat) only works for static ARC-4 types
log(arc4.arc4_signature("Swapped(uint64,uint64)"), a, b)

# or, if you wanted it to be truly dynamic for some reason,
# (noting this has a non-trivial opcode cost) and assuming in this case
# that `event_suffix` is already defined as a `String`:
event_name = String("Event") + event_suffix
event_selector = op.sha512_256((event_name + "(uint64)").bytes)[:4]
log(event_selector, UInt64(6))
```
# ARC-4: Application Binary Interface
[ARC-4](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0004.md) defines a set of encodings and behaviors for authoring and interacting with an Algorand Smart Contract. It is not the only way to author a smart contract, but adhering to it will make it easier for other clients and users to interop with your contract.
To author an ARC-4 contract, you should extend the `ARC4Contract` base class.
```python
from algopy import ARC4Contract


class HelloWorldContract(ARC4Contract):
    ...
```
## ARC-32 and ARC-56
[Section titled “ARC-32 and ARC-56”](#arc-32-and-arc-56)
[ARC-32](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0032.md) extends the concepts in ARC-4 to include an Application Specification which more holistically describes a smart contract and its associated state.
ARC-32/ARC-56 Application Specification files are automatically generated by the compiler for ARC-4 contracts as `.arc32.json` or `.arc56.json` files respectively.
## Methods
[Section titled “Methods”](#methods)
Individual methods on a smart contract should be annotated with an `abimethod` decorator. This decorator is used to indicate a method which should be externally callable. The decorator itself includes properties to restrict when the method should be callable, for instance only when the application is being created or only when the OnComplete action is OptIn.
A method that should not be externally available should be annotated with a `subroutine` decorator.
Method docstrings are used when outputting ARC-32 or ARC-56 application specifications. The following docstring styles are supported: ReST, Google, Numpydoc, and Epydoc.
```python
from algopy import ARC4Contract, subroutine, arc4


class HelloWorldContract(ARC4Contract):
    @arc4.abimethod(create=False, allow_actions=["NoOp", "OptIn"], name="external_name")
    def hello(self, name: arc4.String) -> arc4.String:
        return self.internal_method() + name

    @subroutine
    def internal_method(self) -> arc4.String:
        return arc4.String("Hello, ")
```
## Router
[Section titled “Router”](#router)
Algorand Smart Contracts have only two possible programs that are invoked when making an ApplicationCall Transaction (`appl`): the "clear state" program, which is called when using an OnComplete action of `ClearState`, and the "approval" program, which is called for all other OnComplete actions.
Routing is required to dispatch calls handled by the approval program to the relevant ABI methods. When extending `ARC4Contract`, the routing code is automatically generated for you by the PuyaPy compiler.
## Types
[Section titled “Types”](#types)
ARC-4 defines a number of [data types](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0004.md#types) which can be used in an ARC-4 compatible contract and details how these types should be encoded in binary.
Algorand Python exposes these through a number of types which can be imported from the `algopy.arc4` module. These types represent binary encoded values following the rules prescribed in the ARC, which can mean operations performed directly on these types are not as efficient as ones performed on natively supported types (such as `algopy.UInt64` or `algopy.Bytes`).
Where supported, the native equivalent of an ARC-4 type can be obtained via the `.native` property. It is possible to use native types in an ABI method and the router will automatically encode and decode these types to their ARC-4 equivalent.
### Booleans
[Section titled “Booleans”](#booleans)
**Type:** `algopy.arc4.Bool`\
**Encoding:** A single byte where the most significant bit is `1` for `True` and `0` for `False`\
**Native equivalent:** `builtins.bool`
### Unsigned ints
[Section titled “Unsigned ints”](#unsigned-ints)
**Types:** `algopy.arc4.UIntN` (<= 64 bits) `algopy.arc4.BigUIntN` (> 64 bits)\
**Encoding:** A big endian byte array of N bits\
**Native equivalent:** `algopy.UInt64` or `algopy.BigUInt`
Common bit sizes have also been aliased under `algopy.arc4.UInt8`, `algopy.arc4.UInt16` etc. A uint of any size between 8 and 512 bits (in intervals of 8 bits) can be created using a generic parameter. It can be helpful to define your own alias for this type.
```python
import typing as t
from algopy import arc4
UInt40: t.TypeAlias = arc4.UIntN[t.Literal[40]]
```
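For intuition about the encoding, a uintN is just the value's unsigned big-endian byte representation at the declared width. A minimal off-chain sketch (the `encode_uintn` helper is hypothetical, not part of `algopy`):

```python
def encode_uintn(value: int, n_bits: int) -> bytes:
    # ARC-4 uintN: unsigned big-endian byte array of exactly N/8 bytes
    assert n_bits % 8 == 0 and 8 <= n_bits <= 512
    return value.to_bytes(n_bits // 8, "big")

# A uint40 value occupies exactly 5 bytes
assert encode_uintn(1234, 40) == b"\x00\x00\x00\x04\xd2"
```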
### Unsigned fixed point decimals
[Section titled “Unsigned fixed point decimals”](#unsigned-fixed-point-decimals)
**Types:** `algopy.arc4.UFixedNxM` (<= 64 bits) `algopy.arc4.BigUFixedNxM` (> 64 bits)\
**Encoding:** A big endian byte array of N bits storing the logical value scaled by `10^M` (i.e. `encoded_value = value * (10^M)`)\
**Native equivalent:** *none*
```python
import typing as t
from algopy import arc4
Decimal: t.TypeAlias = arc4.UFixedNxM[t.Literal[64], t.Literal[10]]
```
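As an off-chain sketch of the encoding (using Python's standard `decimal.Decimal`, not the `Decimal` alias above; the `encode_ufixed` helper is hypothetical): per the ARC-4 spec, a ufixedNxM stores the logical value multiplied by `10^M` as a big-endian uintN.

```python
from decimal import Decimal

def encode_ufixed(value: Decimal, n_bits: int, m: int) -> bytes:
    # ARC-4 ufixedNxM: store value * 10^M as an unsigned big-endian uintN
    scaled = int(value.scaleb(m))
    # Reject values with more than M decimal places (scaling must be exact)
    assert Decimal(scaled).scaleb(-m) == value, "value has more than M decimal places"
    return scaled.to_bytes(n_bits // 8, "big")

# With N=64, M=10: 1.5 is stored as the integer 15_000_000_000
assert encode_ufixed(Decimal("1.5"), 64, 10) == (15_000_000_000).to_bytes(8, "big")
```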
### Bytes and strings
[Section titled “Bytes and strings”](#bytes-and-strings)
**Types:** `algopy.arc4.DynamicBytes` and `algopy.arc4.String`\
**Encoding:** A variable length byte array prefixed with a 16-bit big endian header indicating the length of the data\
**Native equivalent:** `algopy.Bytes` and `algopy.String`
Strings are assumed to be utf-8 encoded and the length of a string is the total number of bytes, *not the total number of characters*.
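The byte-length prefix can be sketched off-chain as follows (the `encode_arc4_string` helper is hypothetical, for illustration only):

```python
def encode_arc4_string(s: str) -> bytes:
    # ARC-4 string: 16-bit big-endian byte-length prefix, then the UTF-8 data
    data = s.encode("utf-8")
    return len(data).to_bytes(2, "big") + data

assert encode_arc4_string("hi") == b"\x00\x02hi"
# The prefix counts bytes, not characters: "é" is one character but two bytes
assert encode_arc4_string("é") == b"\x00\x02\xc3\xa9"
```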
### Static arrays
[Section titled “Static arrays”](#static-arrays)
**Type:** `algopy.arc4.StaticArray`\
**Encoding:** See [ARC-4 Container Packing](#arc-4-container-packing)\
**Native equivalent:** *none*
An ARC-4 StaticArray is an array of a fixed size. The item type is specified by the first generic parameter and the size is specified by the second.
```python
import typing as t
from algopy import arc4
FourBytes: t.TypeAlias = arc4.StaticArray[arc4.Byte, t.Literal[4]]
```
### Address
[Section titled “Address”](#address)
**Type:** `algopy.arc4.Address`\
**Encoding:** A byte array 32 bytes long\
**Native equivalent:** [`algopy.Account`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Account)
Address represents an Algorand address’s public key, and can be used instead of `algopy.Account` when needing to reference an address in an ARC-4 struct, tuple or return type. It is a subclass of `arc4.StaticArray[arc4.Byte, typing.Literal[32]]`.
### Dynamic arrays
[Section titled “Dynamic arrays”](#dynamic-arrays)
**Type:** `algopy.arc4.DynamicArray`\
**Encoding:** See [ARC-4 Container Packing](#arc-4-container-packing)\
**Native equivalent:** *none*
An ARC-4 DynamicArray is an array of variable size. The item type is specified by the first generic parameter. Items can be added and removed via `.pop`, `.append`, and `.extend`.
The current length of the array is encoded in a 16-bit prefix, similar to the `arc4.DynamicBytes` and `arc4.String` types.
```python
import typing as t
from algopy import arc4
UInt64Array: t.TypeAlias = arc4.DynamicArray[arc4.UInt64]
```
### Tuples
[Section titled “Tuples”](#tuples)
**Type:** `algopy.arc4.Tuple`\
**Encoding:** See [ARC-4 Container Packing](#arc-4-container-packing)\
**Native equivalent:** `builtins.tuple`
ARC-4 Tuples are immutable statically sized arrays of mixed item types. Item types can be specified via generic parameters or inferred from constructor parameters.
### Structs
[Section titled “Structs”](#structs)
**Type:** `algopy.arc4.Struct`\
**Encoding:** See [ARC-4 Container Packing](#arc-4-container-packing)\
**Native equivalent:** `typing.NamedTuple`
ARC-4 Structs are named tuples. The `frozen` class keyword argument can be used to indicate whether a struct can be mutated. Items can be accessed and mutated by name instead of by index. Structs do not have a `.native` property, but a `typing.NamedTuple` can be used in ABI methods and will be encoded/decoded to an ARC-4 struct automatically.
```python
import typing
from algopy import arc4
Decimal: typing.TypeAlias = arc4.UFixedNxM[typing.Literal[64], typing.Literal[9]]
class Vector(arc4.Struct, kw_only=True, frozen=True):
x: Decimal
y: Decimal
```
### ARC-4 Container Packing
[Section titled “ARC-4 Container Packing”](#arc-4-container-packing)
ARC-4 encoding rules are detailed explicitly in the [ARC](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0004.md#encoding-rules). A summary is included here.
Containers are composed of a head and tail portion.
* For dynamic arrays, the head is prefixed with the length of the array encoded as a 16-bit number. This prefix is not included in offset calculation
* For fixed sized items (eg. Bool, UIntN, or a StaticArray of UIntN), the item is included in the head
* Consecutive Bool items are compressed into the minimum number of whole bytes possible by using a single bit to represent each Bool
* For variable sized items (eg. DynamicArray, String etc), a pointer is included to the head and the data is added to the tail. This pointer represents the offset from the start of the head to the start of the item data in the tail.
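The head/tail layout above can be illustrated in plain Python for a dynamic array of strings, a dynamically sized item type (this is a sketch of the encoding rules, not algopy code; the string helper repeats the `arc4.String` rule so the example is self-contained):

```python
# each string: utf-8 bytes prefixed with a 16-bit big-endian byte length
def encode_arc4_string(s: str) -> bytes:
    data = s.encode("utf-8")
    return len(data).to_bytes(2, "big") + data

def encode_arc4_string_array(items: list[str]) -> bytes:
    encoded_items = [encode_arc4_string(s) for s in items]
    head = b""
    tail = b""
    head_size = 2 * len(items)  # one 16-bit pointer per dynamic item
    for enc in encoded_items:
        # pointer = offset from the start of the head to the item data in the tail
        head += (head_size + len(tail)).to_bytes(2, "big")
        tail += enc
    # the 16-bit array length prefix is not included in offset calculations
    return len(items).to_bytes(2, "big") + head + tail

assert encode_arc4_string_array(["AB", "C"]) == bytes.fromhex(
    "0002"       # array length: 2 items
    "0004 0008"  # head: offsets of "AB" and "C" from the start of the head
    "0002 4142"  # tail: "AB" encoded (length 2 + utf-8 bytes)
    "0001 43"    # tail: "C" encoded (length 1 + utf-8 byte)
)
```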
### Reference types
[Section titled “Reference types”](#reference-types)
**Types:** `algopy.Account`, `algopy.Application`, `algopy.Asset`, `algopy.gtxn.PaymentTransaction`, `algopy.gtxn.KeyRegistrationTransaction`, `algopy.gtxn.AssetConfigTransaction`, `algopy.gtxn.AssetTransferTransaction`, `algopy.gtxn.AssetFreezeTransaction`, `algopy.gtxn.ApplicationCallTransaction`
The ARC-4 specification allows for using a number of [reference types](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0004.md#reference-types) in an ABI method signature where this reference type refers to…
* another transaction in the group
* an account in the accounts array (`apat` property of the transaction)
* an asset in the foreign assets array (`apas` property of the transaction)
* an application in the foreign apps array (`apfa` property of the transaction)
These types can only be used as parameters, and not as return types.
```python
from algopy import (
Account,
Application,
ARC4Contract,
Asset,
arc4,
gtxn,
)
class Reference(ARC4Contract):
@arc4.abimethod
def with_transactions(
self,
asset: Asset,
pay: gtxn.PaymentTransaction,
account: Account,
app: Application,
axfr: gtxn.AssetTransferTransaction
) -> None:
...
```
### Mutability
[Section titled “Mutability”](#mutability)
To ensure semantic compatibility, the compiler will also check for any usages of mutable ARC-4 types (arrays and structs) and ensure that any additional references are copied using the `.copy()` method.
Python values are passed by reference, and when an object (e.g. an array or struct) is mutated in one place, all references to that object see the mutated version. In Python this is managed via the heap. In Algorand Python these mutable values are instead stored on the stack, so when an additional reference is made (i.e. by assigning to another variable) a copy is added to the stack, which means that if one reference were mutated, the others would not see the change. To keep the semantics the same, the compiler forces the addition of `.copy()` each time a new reference to the same object is made, to match what will happen on the AVM.
Struct types can be marked as `frozen`, which eliminates the need for `.copy()` as long as the struct also contains no mutable fields (such as arrays or another mutable struct).
# Python builtins
Some common python builtins have equivalent `algopy` versions, that use an [`UInt64`](/docs/algorand-python/python/latest/api/api-algopy#algopy.UInt64) instead of a native `int`.
## len
[Section titled “len”](#len)
The `len()` builtin is not supported. Instead, `algopy` types that have a length have a `.length` property of type [`UInt64`](/docs/algorand-python/python/latest/api/api-algopy#algopy.UInt64). This is primarily due to `len()` always returning `int` and the CPython implementation enforcing that it returns *exactly* `int`.
## range
[Section titled “range”](#range)
The `range()` builtin has an equivalent [`algopy.urange`](/docs/algorand-python/python/latest/api/api-algopy#algopy.urange). This behaves the same as the python builtin except that it returns an iteration of [`UInt64`](/docs/algorand-python/python/latest/api/api-algopy#algopy.UInt64) values instead of `int`.
## enumerate
[Section titled “enumerate”](#enumerate)
The `enumerate()` builtin has an equivalent [`algopy.uenumerate`](/docs/algorand-python/python/latest/api/api-algopy#algopy.uenumerate). This behaves the same as the python builtin except that it returns an iteration of [`UInt64`](/docs/algorand-python/python/latest/api/api-algopy#algopy.UInt64) index values and the corresponding item.
## reversed
[Section titled “reversed”](#reversed)
The `reversed()` builtin is supported when iterating within a `for` loop and behaves the same as the python builtin.
## types
[Section titled “types”](#types)
See [here](/algokit/languages/python/lg-types/#python-built-in-types)
# Calling other applications
The preferred way to call other smart contracts is using [`algopy.arc4.abi_call`](#algopyarc4abi_call), [`algopy.arc4.arc4_create`](#algopyarc4arc4_create) or [`algopy.arc4.arc4_update`](#algopyarc4arc4_update). These methods support type checking and encoding of arguments, decoding of results, group transactions, and, in the case of `arc4_create` and `arc4_update`, automatic inclusion of approval and clear state programs.
## `algopy.arc4.abi_call`
[Section titled “algopy.arc4.abi\_call”](#algopyarc4abi_call)
[`algopy.arc4.abi_call`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.abi_call) can be used to call other ARC-4 contracts. The first argument should refer to an ARC-4 method, either by referencing an Algorand Python [`algopy.arc4.ARC4Contract`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.ARC4Contract) method, an [`algopy.arc4.ARC4Client`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.ARC4Client) method generated from an ARC-32/ARC-56 app spec, or a string representing the ARC-4 method signature or name. The following arguments should then be the arguments required for the call; these will be type checked and converted where appropriate. Any other related transaction parameters, such as `app_id`, `fee` etc., can also be provided as keyword arguments.
If the ARC-4 method returns an ARC-4 result, the result will be a tuple of the ARC-4 result and the inner transaction. If the ARC-4 method does not return a result, or if the result type is not fully qualified, then just the inner transaction is returned.
```python
from algopy import Application, ARC4Contract, String, arc4, subroutine
class HelloWorld(ARC4Contract):
@arc4.abimethod()
def greet(self, name: String) -> String:
return "Hello " + name
@subroutine
def call_existing_application(app: Application) -> None:
greeting, greet_txn = arc4.abi_call(HelloWorld.greet, "there", app_id=app)
assert greeting == "Hello there"
assert greet_txn.app_id == 1234
```
### Alternative ways to use `arc4.abi_call`
[Section titled “Alternative ways to use arc4.abi\_call”](#alternative-ways-to-use-arc4abi_call)
#### ARC4Client method
[Section titled “ARC4Client method”](#arc4client-method)
An `ARC4Client` represents the ARC-4 abimethods of a smart contract and can be used to call those abimethods in a type-safe way.
ARC4Clients can be produced by using `puyapy --output-client=True` when compiling a smart contract (useful if you want to publish a client for consumption by other smart contracts). An ARC4Client can also be generated from an ARC-32/ARC-56 application.json using `puyapy-clientgen`, e.g. `puyapy-clientgen examples/hello_world_arc4/out/HelloWorldContract.arc56.json`; this is the recommended approach for calling another smart contract that is not written in Algorand Python or does not provide its source.
```python
from algopy import arc4, subroutine
class HelloWorldClient(arc4.ARC4Client):
def hello(self, name: arc4.String) -> arc4.String: ...
@subroutine
def call_another_contract() -> None:
# can reference another algopy contract method
result, txn = arc4.abi_call(HelloWorldClient.hello, arc4.String("World"), app=...)
assert result == "Hello, World"
```
#### Method signature or name
[Section titled “Method signature or name”](#method-signature-or-name)
An ARC-4 method signature can be used, e.g. `"hello(string)string"`, along with a type index to specify the return type. Alternatively, just a method name can be provided and the signature will be inferred from the arguments and return type, e.g.
```python
from algopy import arc4, subroutine
@subroutine
def call_another_contract() -> None:
# can reference a method selector
    result, txn = arc4.abi_call[arc4.String]("hello(string)string", arc4.String("Algo"), app=...)
assert result == "Hello, Algo"
# can reference a method name, the method selector is inferred from arguments and return type
    result, txn = arc4.abi_call[arc4.String]("hello", "There", app=...)
assert result == "Hello, There"
```
## `algopy.arc4.arc4_create`
[Section titled “algopy.arc4.arc4\_create”](#algopyarc4arc4_create)
[`algopy.arc4.arc4_create`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.arc4_create) can be used to create ARC-4 applications, and will automatically populate required fields for app creation (such as approval program, clear state program, and global/local state allocation).
Like [`algopy.arc4.abi_call`](/algokit/languages/python/lg-transactions/#create-an-arc-4-application-and-then-call-it), it handles ARC-4 arguments and provides ARC-4 return values.
If the compiled programs and state allocation fields need to be customized (for example due to [template variables](/algokit/languages/python/lg-compile/#within-other-contracts)), this can be done by passing a [`algopy.CompiledContract`](/docs/algorand-python/python/latest/api/api-algopy#algopy.CompiledContract) via the `compiled` keyword argument.
```python
from algopy import ARC4Contract, String, arc4, subroutine
class HelloWorld(ARC4Contract):
@arc4.abimethod()
def greet(self, name: String) -> String:
return "Hello " + name
@subroutine
def create_new_application() -> None:
hello_world_app = arc4.arc4_create(HelloWorld).created_app
greeting, _txn = arc4.abi_call(HelloWorld.greet, "there", app_id=hello_world_app)
assert greeting == "Hello there"
```
## `algopy.arc4.arc4_update`
[Section titled “algopy.arc4.arc4\_update”](#algopyarc4arc4_update)
[`algopy.arc4.arc4_update`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.arc4_update) is used to update an existing ARC-4 application and will automatically populate the required approval and clear state program fields.
Like [`algopy.arc4.abi_call`](/algokit/languages/python/lg-transactions/#create-an-arc-4-application-and-then-call-it), it handles ARC-4 arguments and provides ARC-4 return values.
If the compiled programs need to be customized (for example due to [template variables](/algokit/languages/python/lg-compile/#within-other-contracts)), this can be done by passing a [`algopy.CompiledContract`](/docs/algorand-python/python/latest/api/api-algopy#algopy.CompiledContract) via the `compiled` keyword argument.
```python
from algopy import Application, ARC4Contract, String, arc4, subroutine
class NewApp(ARC4Contract):
@arc4.abimethod()
def greet(self, name: String) -> String:
return "Hello " + name
@subroutine
def update_existing_application(existing_app: Application) -> None:
hello_world_app = arc4.arc4_update(NewApp, app_id=existing_app)
greeting, _txn = arc4.abi_call(NewApp.greet, "there", app_id=hello_world_app)
assert greeting == "Hello there"
```
## Using `itxn.ApplicationCall`
[Section titled “Using itxn.ApplicationCall”](#using-itxnapplicationcall)
If the application being called is not an ARC-4 contract, or an application specification is not available, then [`algopy.itxn.ApplicationCall`](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.ApplicationCall) can be used. This approach is generally more verbose than the above approaches, so it should only be used when required. See [here](/algokit/languages/python/lg-transactions/#create-an-arc-4-application-and-then-call-it) for an example.
# Compiling to AVM bytecode
The PuyaPy compiler can compile Algorand Python smart contracts directly into AVM bytecode. Once compiled, this bytecode can be utilized to construct AVM Application Call transactions both on and off chain.
## Outputting AVM bytecode from CLI
[Section titled “Outputting AVM bytecode from CLI”](#outputting-avm-bytecode-from-cli)
The `--output-bytecode` option can be used to generate `.bin` files for smart contracts and logic signatures, producing an approval and clear program for each smart contract.
## Obtaining bytecode within other contracts
[Section titled “Obtaining bytecode within other contracts”](#obtaining-bytecode-within-other-contracts)
The [`compile_contract`](/docs/algorand-python/python/latest/api/api-algopy#algopy.compile_contract) function takes an Algorand Python smart contract class and returns a [`CompiledContract`](/docs/algorand-python/python/latest/api/api-algopy#algopy.CompiledContract). The global state, local state and program pages allocation parameters are derived from the contract by default, but can be overridden. This compiled contract can then be used to create an [`algopy.itxn.ApplicationCall`](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.ApplicationCall) transaction or used with the [ARC-4](/algokit/languages/python/lg-calling-apps/) functions.
The [`compile_logicsig`](/docs/algorand-python/python/latest/api/api-algopy#algopy.compile_logicsig) function takes an Algorand Python logic signature and returns a [`CompiledLogicSig`](/docs/algorand-python/python/latest/api/api-algopy#algopy.CompiledLogicSig), which can be used to verify whether a transaction has been signed by a particular logic signature.
## Template variables
[Section titled “Template variables”](#template-variables)
Algorand Python supports defining [`algopy.TemplateVar`](/docs/algorand-python/python/latest/api/api-algopy#algopy.TemplateVar) variables that can be substituted during compilation.
For example, the following contract has `UInt64` and `Bytes` template variables.
```python
from algopy import ARC4Contract, Bytes, TemplateVar, UInt64, arc4
class TemplatedContract(ARC4Contract):
@arc4.abimethod
def my_method(self) -> UInt64:
        return TemplateVar[UInt64]("SOME_UINT")
@arc4.abimethod
def my_other_method(self) -> Bytes:
        return TemplateVar[Bytes]("SOME_BYTES")
```
When compiling to bytecode, the values for these template variables must be provided. These values can be provided via the CLI, or through the `template_vars` parameter of the [`compile_contract`](/docs/algorand-python/python/latest/api/api-algopy#algopy.compile_contract) and [`compile_logicsig`](/docs/algorand-python/python/latest/api/api-algopy#algopy.compile_logicsig) functions.
### CLI
[Section titled “CLI”](#cli)
The `--template-var` option can be used to [define](/algokit/languages/python/compiler/#defining-template-values) each variable.
For example, to provide the values for the above example contract, the following command could be used: `puyapy --template-var SOME_UINT=123 --template-var SOME_BYTES=0xABCD templated_contract.py`
### Within other contracts
[Section titled “Within other contracts”](#within-other-contracts)
The functions [`compile_contract`](/docs/algorand-python/python/latest/api/api-algopy#algopy.compile_contract) and [`compile_logicsig`](/docs/algorand-python/python/latest/api/api-algopy#algopy.compile_logicsig) both have an optional `template_vars` parameter which can be used to define template variables. Variables defined in this manner take priority over variables defined on the CLI.
```python
from algopy import Bytes, UInt64, arc4, compile_contract, subroutine
from templated_contract import TemplatedContract
@subroutine
def create_templated_contract() -> None:
compiled = compile_contract(
TemplatedContract,
global_uints=2, # customize allocated global uints
template_vars={ # provide template vars
"SOME_UINT": UInt64(123),
"SOME_BYTES": Bytes(b"\xAB\xCD")
},
)
arc4.arc4_create(TemplatedContract, compiled=compiled)
```
# Control flow structures
Control flow in Algorand Python is similar to standard Python control flow, with support for if statements, while loops, for loops, and match statements.
## If statements
[Section titled “If statements”](#if-statements)
If statements work the same as in Python. Each condition must be an expression that evaluates to `bool`, which can include a [String or Uint64](/algokit/languages/python/lg-types/) among others.
```python
if condition:
# block of code to execute if condition is True
elif condition2:
# block of code to execute if condition is False and condition2 is True
else:
# block of code to execute if condition and condition2 are both False
```
[See full example](https://github.com/algorandfoundation/puya/blob/main/test_cases/simplish/contract.py).
## Ternary conditions
[Section titled “Ternary conditions”](#ternary-conditions)
Ternary conditions work the same as Python. The condition must be an expression that evaluates to bool, which can include a [String or Uint64](/algokit/languages/python/lg-types/) among others.
```python
value1 = UInt64(5)
value2 = String(">6") if value1 > 6 else String("<=6")
```
## While loops
[Section titled “While loops”](#while-loops)
While loops work the same as Python. The condition must be an expression that evaluates to bool, which can include a [String or Uint64](/algokit/languages/python/lg-types/) among others.
You can use `break` and `continue`.
```python
while condition:
# block of code to execute if condition is True
```
[See full example](https://github.com/algorandfoundation/puya/blob/main/test_cases/unssa/contract.py#L32-L83).
## For Loops
[Section titled “For Loops”](#for-loops)
For loops are used to iterate over sequences, ranges and [ARC-4 arrays](/algokit/languages/python/lg-arc4/). They work the same as Python.
Algorand Python provides functions like `uenumerate` and `urange` to facilitate creating sequences and ranges; Python’s built-in `reversed` function works with these.
* `uenumerate` is similar to Python’s built-in `enumerate` function, but for `UInt64` numbers; it allows you to loop over a sequence and have an automatic counter.
* `urange` generates a sequence of `UInt64` numbers, which you can iterate over.
* `reversed` returns a reversed iterator of a sequence.
Here is an example of how you can use these functions in a contract:
```python
from algopy import Bytes, arc4, uenumerate, urange

test_array = arc4.StaticArray(arc4.UInt8(), arc4.UInt8(), arc4.UInt8(), arc4.UInt8())
# iterate over the reversed range 3, 2, 1, 0 while keeping a forward index
for index, item in uenumerate(reversed(urange(4))):
    test_array[index] = arc4.UInt8(item)
assert test_array.bytes == Bytes.from_hex("03020100")
```
[See full examples](https://github.com/algorandfoundation/puya/blob/main/test_cases/nested_loops/contract.py).
## Match Statements
[Section titled “Match Statements”](#match-statements)
Match statements support basic case/switch functionality and otherwise work the same as in Python. However, captures, patterns, and guard clauses are not currently supported.
```python
match value:
case pattern1:
# block of code to execute if pattern1 matches
case pattern2:
# block of code to execute if pattern2 matches
case _:
# Fallback
```
[See full example](https://github.com/algorandfoundation/puya/blob/main/test_cases/match/contract.py).
# Data structures
In terms of data structures, Algorand Python currently provides support for [composite](https://en.wikipedia.org/wiki/Composite_data_type) data types and arrays.
In a restricted and costly computing environment such as a blockchain application, making the correct choice for data structures is crucial.
All ARC-4 data types are supported, and initially they were the only choice of data structures in Algorand Python 1.0, other than statically sized native Python tuples. However, ARC-4 encoding is not an efficient encoding for mutations, and ARC-4 containers were restricted to containing only other ARC-4 types.
As of Algorand Python 2.7, two new array types were introduced: [`algopy.Array`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Array), a mutable array type that supports statically sized native and ARC-4 elements, and [`algopy.ImmutableArray`](/docs/algorand-python/python/latest/api/api-algopy#algopy.ImmutableArray), which has an immutable API and supports dynamically sized native and ARC-4 elements.
## Mutability vs Immutability
[Section titled “Mutability vs Immutability”](#mutability-vs-immutability)
A value with an immutable type cannot be modified. Some examples are [`UInt64`](/docs/algorand-python/python/latest/api/api-algopy#algopy.UInt64), [`Bytes`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Bytes), `tuple` and `typing.NamedTuple`.
Aggregate immutable types such as `tuple` or `ImmutableArray` provide a way to produce modified values: a copy of the original value is returned with the specified changes applied, e.g.
```python
import typing
import algopy
# update a named tuple with _replace
class MyTuple(typing.NamedTuple):
foo: algopy.UInt64
bar: algopy.String
tup1 = MyTuple(foo=algopy.UInt64(12), bar=algopy.String("Hello"))
# this does not modify tup1
tup2 = tup1._replace(foo=algopy.UInt64(34))
assert tup1.foo != tup2.foo
# update immutable array by appending and reassigning
arr = algopy.ImmutableArray[MyTuple]()
arr = arr.append(tup1)
arr = arr.append(tup2)
```
Mutable types allow direct modification of a value, and all references to that value observe the change, e.g.
```python
import algopy
# my_arr and my_arr2 both point to the same array
my_arr = algopy.Array[algopy.UInt64]()
my_arr2 = my_arr
my_arr.append(algopy.UInt64(12))
assert my_arr.length == 1
assert my_arr2.length == 1
my_arr2.append(algopy.UInt64(34))
assert my_arr2.length == 2
assert my_arr.length == 2
```
## Static size vs Dynamic size
[Section titled “Static size vs Dynamic size”](#static-size-vs-dynamic-size)
A statically sized type is one whose total size in memory is determinable at compile time; for example, `UInt64` is always 8 bytes of memory. Aggregate types such as `tuple`, `typing.NamedTuple`, `arc4.Struct` and `arc4.Tuple` are statically sized if all their members are also statically sized, e.g. `tuple[UInt64, UInt64]` is statically sized as it contains two statically sized members.
Any type whose size is not statically defined is dynamically sized, e.g. `Bytes`, `String`, `tuple[UInt64, String]` and `Array[UInt64]` are all dynamically sized.
## Size constraints
[Section titled “Size constraints”](#size-constraints)
No `bytes` value on the AVM stack can exceed 4096 bytes in length, which means all arrays and structs are also limited to this size. Boxes are an exception: the contents of a box can be up to 32KB. However, loading such a box into a variable in its entirety is not possible, as it would exceed the AVM limit of 4096 bytes; Puya will instead support reading and writing parts of a box.
```python
import typing
from algopy import Box, FixedArray, Struct, UInt64, arc4, size_of
class BigStruct(Struct):
count: UInt64 # 8 bytes
large_array: FixedArray[UInt64, typing.Literal[512]] # 4096 bytes
class Contract(arc4.ARC4Contract):
def __init__(self) -> None:
self.box = Box(BigStruct)
self.box.create()
@arc4.abimethod()
def read_box_fails(self) -> UInt64:
assert size_of(BigStruct) == 4104
        big_struct = self.box.value  # fails to compile: size_of(BigStruct) exceeds the 4096 byte limit
        assert big_struct.count > 0
```
## Algorand Python composite types
[Section titled “Algorand Python composite types”](#algorand-python-composite-types)
### `tuple`
[Section titled “tuple”](#tuple)
This is a regular Python tuple.
* Immutable
* Members can be of any type
* Most useful as an anonymous type
* Each member is stored on the stack; within a function this makes them quite efficient, but passing them to another function can require a lot of stack manipulation to order all the members correctly
### `typing.NamedTuple`
[Section titled “typing.NamedTuple”](#typingnamedtuple)
* Immutable
* Members can be of any type
* Members are described by a field name and type
* Modified copies can be made using `._replace`
* Each member is stored on the stack; within a function this makes them quite efficient, but passing them to another function can require a lot of stack manipulation to order all the members correctly
### `Struct`
[Section titled “Struct”](#struct)
* Can contain any type except transactions
* Members are described by a field name and type
* Can be immutable if using the `frozen` class option and all members are also immutable
* Requires [`.copy()`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Struct.copy) when mutable and creating additional references
* Encoded as a single ARC-4 value on the stack
### `arc4.Tuple`
[Section titled “arc4.Tuple”](#arc4tuple)
* Can only contain other ARC-4 types
* Can be immutable if all members are also immutable
* Requires [`.copy()`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.Tuple.copy) when mutable and creating additional references
* Encoded as a single ARC-4 value on the stack
### `arc4.Struct`
[Section titled “arc4.Struct”](#arc4struct)
* Can only contain other ARC-4 types
* Members are described by a field name and type
* Can be immutable if using the `frozen` class option and all members are also immutable
* Requires [`.copy()`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.Struct.copy) when mutable and creating additional references
* Encoded as a single ARC-4 value on the stack
## Algorand Python array types
[Section titled “Algorand Python array types”](#algorand-python-array-types)
### `algopy.FixedArray`
[Section titled “algopy.FixedArray”](#algopyfixedarray)
* Can contain any type except transactions
* Can only contain a fixed number of elements
* Most efficient array type
* Requires [`.copy()`](/docs/algorand-python/python/latest/api/api-algopy#algopy.FixedArray.copy) if making additional references to the array or any mutable elements
### `algopy.Array`
[Section titled “algopy.Array”](#algopyarray)
* Can contain any type except transactions
* Dynamically sized; efficient for reading (when assembled off-chain) but inefficient to manipulate on-chain
* Requires [`.copy()`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Array.copy) if making additional references to the array or any mutable elements
### `algopy.ReferenceArray`
[Section titled “algopy.ReferenceArray”](#algopyreferencearray)
* Mutable, all references see modifications
* Only supports statically sized immutable element types. Note: supporting mutable elements could quickly exhaust scratch slots in a program, so this type is limited to immutable elements
* May use scratch slots to store the data
* Cannot be put in storage or used in ABI method signatures
* An immutable copy can be made for storage or returning from a contract by using the [`freeze`](/docs/algorand-python/python/latest/api/api-algopy#algopy.ReferenceArray.freeze) method e.g.
```python
import algopy
class SomeContract(algopy.arc4.ARC4Contract):
@algopy.arc4.abimethod()
def get_array(self) -> algopy.ImmutableArray[algopy.UInt64]:
arr = algopy.ReferenceArray[algopy.UInt64]()
# modify arr as required
...
# return immutable copy
return arr.freeze()
```
### `algopy.ImmutableArray`
[Section titled “algopy.ImmutableArray”](#algopyimmutablearray)
* Immutable version of `algopy.Array`
* Modifications are done by reassigning a modified copy of the original array
* Can only contain immutable types
* Can be put in storage or used in ABI method signatures
### `algopy.arc4.DynamicArray` / `algopy.arc4.StaticArray`
[Section titled “algopy.arc4.DynamicArray / algopy.arc4.StaticArray”](#algopyarc4dynamicarray--algopyarc4staticarray)
* Only supports ARC-4 elements
* Elements often require conversion to native types; use `algopy.Array` / `algopy.FixedArray` to avoid explicit conversions
* Dynamically sized types are efficient for reading, but not writing
* Requires [`.copy()`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.DynamicArray) if making additional references to the array or mutable elements
## Tips
[Section titled “Tips”](#tips)
* Avoid using dynamically sized types as they are less efficient and can obfuscate constraints of the AVM (`algopy.Bytes`, `algopy.String`, `algopy.Array`, `algopy.arc4.DynamicArray`, `algopy.arc4.DynamicBytes`, `algopy.arc4.String`)
* Prefer frozen structs where possible to avoid `.copy()` requirements
* If a function needs just a few values of a tuple, it is more efficient to pass those members rather than the whole tuple
* For passing composite values between functions there can be different trade-offs in op budget and program size between a tuple and a struct; if this is a concern, test and confirm which suits your contract best
* All array types except `algopy.ReferenceArray` can be used in storage and ABI methods, and will be viewed externally (i.e. in ARC-56 definitions) as the equivalent ARC-4 encoded type
* Use [`algopy.ReferenceArray.freeze`](/docs/algorand-python/python/latest/api/api-algopy#algopy.ReferenceArray.freeze) to convert the array to an `algopy.ImmutableArray` for storage
# Error handling and assertions
In Algorand Python, error handling and assertions play a crucial role in ensuring the correctness and robustness of smart contracts.
## Assertions
[Section titled “Assertions”](#assertions)
Assertions allow you to immediately fail a smart contract if a [Boolean statement or value](/algokit/languages/python/lg-types/#bool) evaluates to `False`. If an assertion fails, it immediately stops the execution of the contract and marks the call as a failure.
In Algorand Python, you can use the Python built-in `assert` statement to make assertions in your code.
For example:
```python
@subroutine
def set_value(value: UInt64) -> None:
    assert value > 4, "Value must be > 4"
```
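The behaviour mirrors the ordinary Python `assert` statement. A plain-Python sketch (not algopy): in a compiled contract a failed assertion rejects the transaction, whereas in ordinary Python it raises `AssertionError` carrying the same message.

```python
# Plain-Python sketch of the same check; on-chain, the failure rejects
# the transaction instead of raising an exception.
def set_value(value: int) -> None:
    assert value > 4, "Value must be > 4"


set_value(5)  # passes
try:
    set_value(3)
except AssertionError as error:
    print(error)  # Value must be > 4
```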
### Assertion error handling
[Section titled “Assertion error handling”](#assertion-error-handling)
The optional string message provided with an assertion will be added as a TEAL comment at the end of the assertion line. This works in concert with the default AlgoKit Utils app client behaviour to show a TEAL stack trace of an error and thus surface the error message to the caller (when source maps have been loaded).
## Explicit failure
[Section titled “Explicit failure”](#explicit-failure)
For scenarios where you need to fail a contract explicitly, you can use the [`op.err()`](/docs/algorand-python/python/latest/api/api-algopy.op#algopy.op.err) operation. This operation causes the TEAL program to immediately and unconditionally fail.
Alternatively [`op.exit(0)`](/docs/algorand-python/python/latest/api/api-algopy.op#algopy.op.exit) will achieve the same result. A non-zero value will do the opposite and immediately succeed.
## Exception handling
[Section titled “Exception handling”](#exception-handling)
The AVM doesn’t provide error trapping semantics, so it’s not possible to implement `raise` and `try`/`except`.
For more details see [Unsupported Python features](/algokit/languages/python/lg-unsupported-python-features/#raise-try-except-finally).
# Logging
Algorand Python provides a [`log` method](/docs/algorand-python/python/latest/api/api-algopy#algopy.log) that allows you to emit debugging and event information as well as return values from your contracts to the caller.
This `log` method is a superset of the [AVM `log` method](/algokit/languages/python/lg-ops/) that adds extra functionality:
* You can log multiple items rather than a single item
* Items are concatenated together with an optional separator (which defaults to: `""`)
* Items are automatically converted to bytes for you
* Support for:
* `int` literals / module variables (encoded as raw bytes, not ASCII)
* `UInt64` values (encoded as raw bytes, not ASCII)
* `str` literals / module variables (encoded as UTF-8)
* `bytes` literals / module variables (encoded as is)
* `Bytes` values (encoded as is)
* `BytesBacked` values, which includes [`String`](/docs/algorand-python/python/latest/api/api-algopy#algopy.String), [`BigUInt`](/docs/algorand-python/python/latest/api/api-algopy#algopy.BigUInt), [`Account`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Account) and all of the [ARC-4 types](/docs/algorand-python/python/latest/api/api-algopy.arc4) (encoded as their underlying bytes values)
Logged values are [available to the calling client](https://dev.algorand.co/reference/rest-api/algod/#pendingtransactionresponse) and attached to the transaction record stored on the blockchain ledger.
If you want to emit ARC-28 events in the logs then there is a [purpose-built function for that](/algokit/languages/python/lg-arc28/).
Here’s an example contract that uses the log method in various ways:
```python
from algopy import BigUInt, Bytes, Contract, log, op
class MyContract(Contract):
    def approval_program(self) -> bool:
        log(0)
        log(b"1")
        log("2")
        log(op.Txn.num_app_args + 3)
        log(Bytes(b"4") if op.Txn.num_app_args else Bytes())
        log(
            b"5",
            6,
            op.Txn.num_app_args + 7,
            BigUInt(8),
            Bytes(b"9") if op.Txn.num_app_args else Bytes(),
            sep="_",
        )
        return True

    def clear_state_program(self) -> bool:
        return True
```
# PuyaPy migration from 4.x to 5.0
PuyaPy 5.0 and the accompanying Algorand Python 3.0 `algopy` stubs have some breaking changes from prior versions. This document outlines those changes and how to resolve them.
## `algopy.Array` to `algopy.ReferenceArray`
[Section titled “algopy.Array to algopy.ReferenceArray”](#algopyarray-to-algopyreferencearray)
The `algopy.Array` type present in 4.x has been renamed to `algopy.ReferenceArray` to make it clearer how it differs from the other array types. If a contract was using this type in 4.x, it could encounter one of the following errors after upgrading to 5.0:
* `No overload variant of "Array" matches argument types`
* `expression is not valid as an assignment target`
* `unsupported assignment target`
* `mutable values cannot be passed more than once to a subroutine`
A simple way to solve this for existing contracts using the old name is to alias the `ReferenceArray` type as `Array`.
e.g.
```python
from algopy import ReferenceArray as Array
```
If you need to use both `algopy.ReferenceArray` and the new `algopy.Array`, then it would be better to update existing `algopy.Array` references to `algopy.ReferenceArray`.
e.g. code that was using `algopy.Array` prior to 5.0
```python
from algopy import *
@subroutine
def some_method(arr: Array[UInt64]) -> None: ...
```
After migrating to 5.0, existing code should use `algopy.ReferenceArray`, while new code is free to use `algopy.Array`:
```python
from algopy import *
@subroutine
def some_method(arr: ReferenceArray[UInt64]) -> None: ...
@subroutine
def a_new_method(arr: Array[UInt64]) -> None: ...
```
## `algopy.Account`, `algopy.Asset` and `algopy.Application` routing behaviour
[Section titled “algopy.Account, algopy.Asset and algopy.Application routing behaviour”](#algopyaccount-algopyasset-and-algopyapplication-routing-behaviour)
The default routing behaviour for the resource types `algopy.Account`, `algopy.Asset` and `algopy.Application` has changed in 5.0. The new behaviour treats these types as their underlying ARC-4 value type when constructing ABI method signatures. This allows for more efficient resource packing when using the `algokit_utils` populate resource functionality.
| Type | 4.x (`foreign_index`) | 5.0 (`value`) |
| -------------------- | --------------------- | ------------- |
| `algopy.Account` | `account` | `address` |
| `algopy.Asset` | `asset` | `uint64` |
| `algopy.Application` | `application` | `uint64` |
There are two methods to return to the 4.x behaviour for these types:
1. Use the original behaviour for the entire compilation by using a CLI option
The original behaviour can be restored by using the `--resource-encoding` CLI option on `puyapy`
e.g. `puyapy --resource-encoding=index path/to/contracts`
2. Use the original behaviour for specific methods by using an `abimethod` option

Individual methods can be forced to use the original behaviour by setting the `resource_encoding` option on `arc4.abimethod`, e.g.
```python
from algopy import arc4, Account, Application, Asset
class MyContract(arc4.ARC4Contract):
    @arc4.abimethod(resource_encoding="index")
    def my_abi_method(self, app: Application, asset: Asset, account: Account) -> None:
        ...

    # has an ARC-4 signature of my_abi_method(application,asset,account)void
```
## Constructor signatures of `ImmutableArray` and `ReferenceArray`
[Section titled “Constructor signatures of ImmutableArray and ReferenceArray”](#constructor-signatures-of-immutablearray-and-referencearray)
With the introduction of the new mutable native arrays (`Array`, `FixedArray`) to PuyaPy, we chose to follow standard Python idioms: these arrays can be initialized with an iterable (a tuple or another array, e.g. `Array((UInt64(1), UInt64(2)))`, `Array(existing_arr)`).
The constructor signatures of `ImmutableArray` and `ReferenceArray` (which was called `Array` prior to 5.0, see [above](#algopyarray-to-algopyreferencearray)) have been changed to be consistent with the new mutable native arrays.
e.g. code that constructs `algopy.ImmutableArray` prior to 5.0
```python
arr = ImmutableArray(UInt64(1), UInt64(2), UInt64(3))
```
After migrating to 5.0, construct `algopy.ImmutableArray` using an iterable parameter
```python
arr = ImmutableArray((UInt64(1), UInt64(2), UInt64(3)))
```
# Module level constructs
You can write compile-time constant code at a module level and then use those constants in place of [Python built-in literal types](/algokit/languages/python/lg-types/#python-built-in-types).
For a full example of what syntax is currently possible see the [test case example](https://github.com/algorandfoundation/puya/blob/main/test_cases/module_consts/contract.py).
## Module constants
[Section titled “Module constants”](#module-constants)
Module constants are compile-time constants and can contain `bool`, `int`, `str` and `bytes` values.
You can use f-strings and other compile-time constant expressions in module constants too.
For example:
```python
from algopy import UInt64, subroutine

SCALE = 100000
SCALED_PI = 314159


@subroutine
def circle_area(radius: UInt64) -> UInt64:
    scaled_result = SCALED_PI * radius**2
    result = scaled_result // SCALE
    return result


@subroutine
def circle_area_100() -> UInt64:
    return circle_area(UInt64(100))
```
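Because the constants are ordinary Python values folded at compile time, the fixed-point arithmetic can be sanity-checked in plain Python (a sketch using built-in `int` in place of `UInt64`):

```python
# Fixed-point circle area: pi is pre-scaled by 100,000 to stay in
# integer arithmetic, and the scale is divided back out at the end.
SCALE = 100000
SCALED_PI = 314159


def circle_area_scaled(radius: int) -> int:
    return SCALED_PI * radius**2 // SCALE


assert circle_area_scaled(100) == 31415  # ~= pi * 100**2
```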
## If statements
[Section titled “If statements”](#if-statements)
You can use if statements with compile-time constants at a module level to define module constants conditionally.
For example:
```python
FOO = 42
if FOO > 12:
BAR = 123
else:
BAR = 456
```
## Integer math
[Section titled “Integer math”](#integer-math)
Module constants can also be defined using common integer expressions.
For example:
```python
SEVEN = 7
TEN = 7 + 3
FORTY_NINE = 7 ** 2
```
## Strings
[Section titled “Strings”](#strings)
Module `str` constants can use f-string formatting and other common string expressions.
For example:
```python
NAME = "There"
MY_FORMATTED_STRING = f"Hello {NAME}" # Hello There
PADDED = f"{123:05}" # "00123"
DUPLICATED = "5" * 3 # "555"
```
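These are standard Python expressions evaluated at compile time, so the resulting values can be verified in any Python interpreter:

```python
# Each constant expression evaluates exactly as it would in regular Python.
NAME = "There"
assert f"Hello {NAME}" == "Hello There"  # f-string interpolation
assert f"{123:05}" == "00123"            # zero-padded to width 5
assert "5" * 3 == "555"                  # string repetition
```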
## Type aliases
[Section titled “Type aliases”](#type-aliases)
You can create type aliases to make your contract terser and more expressive.
For example:
```python
import typing
from algopy import arc4
VoteIndexArray: typing.TypeAlias = arc4.DynamicArray[arc4.UInt8]
Row: typing.TypeAlias = arc4.StaticArray[arc4.UInt8, typing.Literal[3]]
Game: typing.TypeAlias = arc4.StaticArray[Row, typing.Literal[3]]
Move: typing.TypeAlias = tuple[arc4.UInt64, arc4.UInt64]
Bytes32: typing.TypeAlias = arc4.StaticArray[arc4.Byte, typing.Literal[32]]
Proof: typing.TypeAlias = arc4.DynamicArray[Bytes32]
```
# Opcode budgets
Algorand Python provides a helper method for increasing the [available opcode budget](https://dev.algorand.co/concepts/smart-contracts/languages/teal/#dynamic-operational-cost), see [`algopy.ensure_budget`](/docs/algorand-python/python/latest/api/api-algopy#algopy.ensure_budget).
# AVM operations
Algorand Python allows you to express [every opcode the AVM has available](https://dev.algorand.co/concepts/smart-contracts/avm/#operations) apart from ops that manipulate the stack (to avoid conflicts with the compiler) and `log` (to avoid confusion with the superior [Algorand Python log function](/algokit/languages/python/lg-logs/)). These ops are exposed via the [`algopy.op`](/docs/algorand-python/python/latest/api/api-algopy.op#module-algopy.op) submodule. We generally recommend importing the entire submodule so you can use IntelliSense to discover the available methods:
```python
from algopy import UInt64, op, subroutine


@subroutine
def sqrt_16() -> UInt64:
    return op.sqrt(16)
```
All ops are typed using Algorand Python types and have correct static type representations.
Many ops have higher-order functionality that Algorand Python exposes, which limits the need to reach for the underlying ops. For instance, there is first-class support for local and global storage, so there is little need to use the likes of `app_local_get` et al. But they are still exposed in case you want to do something that Algorand Python’s abstractions don’t support.
## Txn
[Section titled “Txn”](#txn)
The `Txn` opcodes are so commonly used they have been exposed directly in the `algopy` module and can be easily imported to make it terser to access:
```python
from algopy import subroutine, Txn


@subroutine
def has_no_app_args() -> bool:
    return Txn.num_app_args == 0
```
## Global
[Section titled “Global”](#global)
The `Global` opcodes are so commonly used they have been exposed directly in the `algopy` module and can be easily imported to make it terser to access:
```python
from algopy import subroutine, Global, Txn


@subroutine
def only_allow_creator() -> None:
    assert Txn.sender == Global.creator_address, "Only the contract creator can perform this operation"
```
# Storing data on-chain
Algorand smart contracts can utilise [three different types of on-chain storage](https://dev.algorand.co/concepts/smart-contracts/storage/overview): [Global storage](#global-storage), [Local storage](#local-storage), and [Box storage](#box-storage). They also have access to a transient form of storage in [Scratch space](#scratch-storage).
The life-cycle of a smart contract matches the semantics of Python classes when you consider deploying a smart contract as “instantiating” the class. Any calls to that smart contract are made to that instance of the smart contract, and any state assigned to `self.` variables will persist across different invocations (provided the transaction it was a part of succeeds, of course). You can deploy the same contract class multiple times, each will become a distinct and isolated instance.
During a single smart contract execution there is also the ability to use “temporary” storage either global to the contract execution via [Scratch storage](#scratch-storage), or local to the current method via [local variables and subroutine params](/algokit/languages/python/lg-structure/#subroutines).
## Global storage
[Section titled “Global storage”](#global-storage)
Global storage is state that is stored against the contract instance and can be retrieved by key. There are [AVM limits to the amount of global storage that can be allocated to a contract](https://dev.algorand.co/concepts/smart-contracts/storage/overview/#global-storage).
This is represented in Algorand Python by either:
1. Assigning any [Algorand Python typed](/algokit/languages/python/lg-types/) value to an instance variable (e.g. `self.value = UInt64(3)`).
* Use this approach if you just require a terse API for getting and setting a state value
2. Using an instance of `GlobalState`, which gives [some extra features](/docs/algorand-python/python/latest/api/api-algopy#algopy.GlobalState) to understand and control the value and the metadata of it (which propagates to the ARC-32/ARC-56 app spec file)
* Use this approach if you need to:
* Omit a default/initial value
* Delete the stored value
* Check if a value exists
* Specify the exact key bytes
* Include a description to be included in App Spec files (ARC-32/ARC-56)
For example:
```python
self.global_int_full = GlobalState(UInt64(55), key="gif", description="Global int full")
self.global_int_simplified = UInt64(33)
self.global_int_no_default = GlobalState(UInt64)
self.global_bytes_full = GlobalState(Bytes(b"Hello"))
self.global_bytes_simplified = Bytes(b"Hello")
self.global_bytes_no_default = GlobalState(Bytes)
global_int_full_set = bool(self.global_int_full)
bytes_with_default_specified = self.global_bytes_no_default.get(b"Default if no value set")
error_if_not_set = self.global_int_no_default.value
```
These values can be assigned anywhere you have access to `self` i.e. any instance methods/subroutines. The information about global storage is automatically included in the ARC-32/ARC-56 app spec file and thus will automatically appear within any [generated typed clients](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/generate.md#1-typed-clients).
## Local storage
[Section titled “Local storage”](#local-storage)
Local storage is state that is stored against the contract instance for a specific account and can be retrieved by key and account address. There are [AVM limits to the amount of local storage that can be allocated to a contract](https://dev.algorand.co/concepts/smart-contracts/storage/overview/#local-storage).
This is represented in Algorand Python by using an instance of [`LocalState`](/docs/algorand-python/python/latest/api/api-algopy#algopy.LocalState).
For example:
```python
def __init__(self) -> None:
    self.local = LocalState(Bytes)
    self.local_with_metadata = LocalState(UInt64, key="lwm", description="Local with metadata")


@subroutine
def get_guaranteed_data(self, for_account: Account) -> Bytes:
    return self.local[for_account]


@subroutine
def get_data_with_default(self, for_account: Account, default: Bytes) -> Bytes:
    return self.local.get(for_account, default)


@subroutine
def get_data_or_assert(self, for_account: Account) -> Bytes:
    result, exists = self.local.maybe(for_account)
    assert exists, "no data for account"
    return result


@subroutine
def set_data(self, for_account: Account, value: Bytes) -> None:
    self.local[for_account] = value


@subroutine
def delete_data(self, for_account: Account) -> None:
    del self.local[for_account]
```
These values can be assigned anywhere you have access to `self` i.e. any instance methods/subroutines. The information about local storage is automatically included in the ARC-32/ARC-56 app spec file and thus will automatically appear within any [generated typed clients](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/generate.md#1-typed-clients).
## Box storage
[Section titled “Box storage”](#box-storage)
We provide two different types for accessing box storage: [Box](/docs/algorand-python/python/latest/api/api-algopy#algopy.Box), and [BoxMap](/docs/algorand-python/python/latest/api/api-algopy#algopy.BoxMap). We also expose raw operations via the [AVM ops](/algokit/languages/python/lg-ops/) module.
Before using box storage, be sure to familiarise yourself with the [requirements and restrictions](https://dev.algorand.co/concepts/smart-contracts/storage/overview/#boxes) of the underlying API.
The `Box` type provides an abstraction over storing a single value in a single box. A box can be declared against `self` in an `__init__` method (in which case the key must be a compile time constant); or as a local variable within any subroutine. `Box` proxy instances can be passed around like any other value.
Once declared, you can interact with the box via its instance methods.
```python
import typing as t
from algopy import Box, arc4, Contract, op
class MyContract(Contract):
    def __init__(self) -> None:
        self.box_a = Box(arc4.StaticArray[arc4.UInt32, t.Literal[20]], key=b"a")

    def approval_program(self) -> bool:
        box_b = Box(arc4.String, key=b"b")
        box_b.value = arc4.String("Hello")
        # Check if the box exists
        if self.box_a:
            # Reassign the value
            self.box_a.value[2] = arc4.UInt32(40)
        else:
            # Assign a new value
            self.box_a.value = arc4.StaticArray[arc4.UInt32, t.Literal[20]].from_bytes(op.bzero(20 * 4))
        # Read a value
        return self.box_a.value[4] == arc4.UInt32(2)
```
In addition to setting and reading the box value, there are operations for extracting and replacing just a portion of the box data. This is useful for minimizing the number of reads and writes required, and also allows you to interact with byte arrays longer than the AVM can otherwise support (currently 4096 bytes).
```python
from algopy import Box, Bytes, Contract, Global, Txn


class MyContract(Contract):
    def approval_program(self) -> bool:
        my_blob = Box(Bytes, key=b"blob")
        sender_bytes = Txn.sender.bytes
        app_address = Global.current_application_address.bytes
        assert my_blob.create(size=8000)
        my_blob.replace(0, sender_bytes)
        my_blob.splice(0, 0, app_address)
        first_64 = my_blob.extract(0, 32 * 2)
        assert first_64 == app_address + sender_bytes
        value, exists = my_blob.maybe()
        assert exists
        del my_blob.value
        value, exists = my_blob.maybe()
        assert not exists
        assert my_blob.get(default=sender_bytes) == sender_bytes
        my_blob.create(size=sender_bytes.length + app_address.length)
        assert my_blob, "Blob exists"
        assert my_blob.length == 64
        return True
```
`BoxMap` is similar to the `Box` type, but allows for grouping a set of boxes with a common key and content type. A custom `key_prefix` can optionally be provided, with the default being to use the variable name as the prefix. The key can be a `Bytes` value, or anything that can be converted to `Bytes`. The final box name is the combination of `key_prefix + key`.
```python
from algopy import Account, BoxMap, Contract, String, Txn


class MyContract(Contract):
    def __init__(self) -> None:
        self.my_map = BoxMap(Account, String, key_prefix=b"a_")

    def approval_program(self) -> bool:
        # Check if the box exists
        if Txn.sender in self.my_map:
            # Append to the existing value
            self.my_map[Txn.sender] += String(" World")
        else:
            # Assign a new value
            self.my_map[Txn.sender] = String("Hello")
        # Read a value
        return self.my_map[Txn.sender] == String("Hello World")
```
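The box name derivation described above can be sketched in plain Python (not algopy). The sketch assumes an `Account` key contributes its raw 32-byte public key, in line with accounts being 32-byte addresses:

```python
# Final box name = key_prefix + the key's byte representation.
def box_name(key_prefix: bytes, key_bytes: bytes) -> bytes:
    return key_prefix + key_bytes


sender = b"\x01" * 32  # hypothetical 32-byte account public key
assert box_name(b"a_", sender) == b"a_" + b"\x01" * 32
```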
If none of these abstractions suit your needs, you can use the box storage [AVM ops](/algokit/languages/python/lg-ops/) to interact with box storage. These ops match closely to the opcodes available on the AVM.
For example:
```python
op.Box.create(b"key", size)
op.Box.put(Txn.sender.bytes, answer_ids.bytes)
(votes, exists) = op.Box.get(Txn.sender.bytes)
op.Box.replace(TALLY_BOX_KEY, index, op.itob(current_vote + 1))
```
See the [voting contract example](https://github.com/algorandfoundation/puya/tree/main/examples/voting/voting.py) for a real-world example that uses box storage.
## Scratch storage
[Section titled “Scratch storage”](#scratch-storage)
To use scratch storage you need to [register the scratch storage that you want to use](/algokit/languages/python/lg-structure/#contract-class-configuration) and then you can use the scratch storage [AVM ops](/algokit/languages/python/lg-ops/).
For example:
```python
from algopy import Bytes, Contract, UInt64, op, urange
TWO = 2
TWENTY = 20


class MyContract(Contract, scratch_slots=(1, TWO, urange(3, TWENTY))):
    def approval_program(self) -> bool:
        op.Scratch.store(1, UInt64(5))
        op.Scratch.store(2, Bytes(b"Hello World"))
        for i in urange(3, 20):
            op.Scratch.store(i, i)
        assert op.Scratch.load_uint64(1) == UInt64(5)
        assert op.Scratch.load_bytes(2) == b"Hello World"
        assert op.Scratch.load_uint64(5) == UInt64(5)
        return True

    def clear_state_program(self) -> bool:
        return True
```
# Program structure
An Algorand Python smart contract is defined within a single class. You can extend other contracts (through inheritance), and also define standalone functions and reference them. This also works across different Python packages - in other words, you can have a Python library with common functions and re-use that library across multiple projects!
## Modules
[Section titled “Modules”](#modules)
Algorand Python modules are files that end in `.py`, as with standard Python. Sub-modules are supported as well, so you’re free to organise your Algorand Python code however you see fit. The standard Python import rules apply, including [relative vs absolute import](https://docs.python.org/3/reference/import.html#package-relative-imports) requirements.
A given module can contain zero, one, or many smart contracts and/or logic signatures.
A module can contain [contracts](#contract-classes), [subroutines](#subroutines), [logic signatures](#logic-signatures), and [compile-time constant code and values](/algokit/languages/python/lg-modules/).
## Typing
[Section titled “Typing”](#typing)
Algorand Python code must be fully typed with [type annotations](https://docs.python.org/3/library/typing.html).
In practice, this mostly means annotating the arguments and return types of all functions.
## Subroutines
[Section titled “Subroutines”](#subroutines)
Subroutines are “internal” or “private” methods to a contract. They can exist as part of a contract class, or at the module level so they can be used by multiple classes or even across multiple projects.
You can pass parameters to subroutines and define local variables, both of which automatically get managed for you with semantics that match Python semantics.
All subroutines must be decorated with `algopy.subroutine`, like so:
```python
def foo() -> None:  # compiler error: not decorated with subroutine
    ...


@algopy.subroutine
def bar() -> None:
    ...
```
#### NOTE
[Section titled “NOTE”](#note)
Requiring this decorator serves two key purposes:
1. You get an understandable error message if you try and use a third party package that wasn’t built for Algorand Python
2. It provides the ability to modify the functions on the fly when running in Python itself, for example in a future testing framework.
Argument and return types to a subroutine can be any Algorand Python variable type (except for [some inner transaction types](/algokit/languages/python/lg-transactions/#inner-transaction-objects-cannot-be-passed-to-or-returned-from-subroutines)).
Returning multiple values is allowed; this is annotated in the standard Python way with `tuple`:
```python
@algopy.subroutine
def return_two_things() -> tuple[algopy.UInt64, algopy.String]:
    ...
```
Keyword only and positional only argument list modifiers are supported:
```python
@algopy.subroutine
def my_method(a: algopy.UInt64, /, b: algopy.UInt64, *, c: algopy.UInt64) -> None:
    ...
```
In this example, `a` can only be passed positionally, `b` can be passed either by position or by name, and `c` can only be passed by name.
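These are standard Python calling-convention semantics, so they can be demonstrated outside of algopy with plain `int` parameters:

```python
# `a` is positional-only (before /), `c` is keyword-only (after *).
def my_method(a: int, /, b: int, *, c: int) -> tuple[int, int, int]:
    return (a, b, c)


assert my_method(1, 2, c=3) == (1, 2, 3)    # b passed by position
assert my_method(1, b=2, c=3) == (1, 2, 3)  # b passed by name
try:
    my_method(a=1, b=2, c=3)  # a cannot be passed by name
except TypeError:
    pass
```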
The following argument/return types are not currently supported:
* Type unions
* Variadic arguments like `*args` and `**kwargs`
* Python types such as `int`
* Default values
## Contract classes
[Section titled “Contract classes”](#contract-classes)
An [Algorand smart contract](https://dev.algorand.co/concepts/smart-contracts/apps/) consists of two distinct “programs”: an approval program and a clear-state program. These are tied together in Algorand Python as a single class.
All contracts must inherit from the base class `algopy.Contract` - either directly or indirectly, which can include inheriting from `algopy.ARC4Contract`.
The life-cycle of a smart contract matches the semantics of Python classes when you consider deploying a smart contract as “instantiating” the class. Any calls to that smart contract are made to that instance of the smart contract, and any state assigned to `self.` will persist across different invocations (provided the transaction it was a part of succeeds, of course). You can deploy the same contract class multiple times, each will become a distinct and isolated instance.
Contract classes can optionally implement an `__init__` method, which will be executed exactly once, on first deployment. This method takes no arguments, but can contain arbitrary code, including reading directly from the transaction arguments via [`Txn`](/docs/algorand-python/python/latest/api/api-algopy.op#algopy.op.Txn). This makes it a good place to put common initialisation code, particularly in ARC-4 contracts with multiple methods that allow for creation.
The contract class body should not contain any logic or variable initialisations, only method definitions. Forward type declarations are allowed.
Example:
```python
class MyContract(algopy.Contract):
    foo: algopy.UInt64  # okay
    bar = algopy.UInt64(1)  # not allowed
    if True:  # also not allowed
        bar = algopy.UInt64(2)
```
Only concrete (i.e. non-abstract) classes produce output artifacts for deployment. To mark a class as explicitly abstract, inherit from [`abc.ABC`](https://docs.python.org/3/library/abc.html#abc.ABC).
#### NOTE
[Section titled “NOTE”](#note-1)
The compiler will produce a warning if a Contract class is implicitly abstract, i.e. if any abstract methods are unimplemented.
### Contract class configuration
[Section titled “Contract class configuration”](#contract-class-configuration)
When defining a contract subclass you can pass configuration options to the `algopy.Contract` base class [per the API documentation](/docs/algorand-python/python/latest/api/api-algopy#algopy.Contract).
Namely you can pass in:
* `name` - Affects the output TEAL file name if there are multiple non-abstract contracts in the same file, and is also used as the contract name in the ARC-32/ARC-56 application.json instead of the class name.
* `scratch_slots` - Allows you to mark a slot ID or range of slot IDs as “off limits” to Puya so you can use them manually.
* `state_totals` - Allows defining the global and local uint and bytes storage totals reserved when creating the contract, which will appear in the ARC-32/ARC-56 app spec.
Full example:
```python
GLOBAL_UINTS = 3


class MyContract(
    algopy.Contract,
    name="CustomName",
    scratch_slots=[5, 25, algopy.urange(110, 115)],
    state_totals=algopy.StateTotals(local_bytes=1, local_uints=2, global_bytes=4, global_uints=GLOBAL_UINTS),
):
    ...
```
### Example: Simplest possible `algopy.Contract` implementation
[Section titled “Example: Simplest possible algopy.Contract implementation”](#example-simplest-possible-algopycontract-implementation)
For a non-ARC-4 contract, the contract class must implement an `approval_program` and a `clear_state_program` method.
As an example, this is a valid contract that always approves:
```python
class Contract(algopy.Contract):
    def approval_program(self) -> bool:
        return True

    def clear_state_program(self) -> bool:
        return True
```
The return value of these methods can be either a `bool` indicating whether the transaction should be approved, or an `algopy.UInt64` value, where `UInt64(0)` indicates that the transaction should be rejected and any other value indicates that it should be approved.
### Example: Simple call counter
[Section titled “Example: Simple call counter”](#example-simple-call-counter)
Here is a very simple example contract that maintains a counter of how many times it has been called (including on create).
```python
class Counter(algopy.Contract):
    def __init__(self) -> None:
        self.counter = algopy.UInt64(0)

    def approval_program(self) -> bool:
        match algopy.Txn.on_completion:
            case algopy.OnCompleteAction.NoOp:
                self.increment_counter()
                return True
            case _:
                # reject all OnCompleteActions other than NoOp
                return False

    def clear_state_program(self) -> bool:
        return True

    @algopy.subroutine
    def increment_counter(self) -> None:
        self.counter += 1
```
Some things to note:
* `self.counter` will be stored in the application’s [Global State](/algokit/languages/python/lg-storage/#global-state).
* The return type of `__init__` must be `None`, per standard typed Python.
* Any methods other than `__init__`, `approval_program` or `clear_state_program` must be decorated with `@subroutine`.
### Example: Simplest possible `algopy.ARC4Contract` implementation
[Section titled “Example: Simplest possible algopy.ARC4Contract implementation”](#example-simplest-possible-algopyarc4contract-implementation)
And here is a valid ARC-4 contract:
```python
class ABIContract(algopy.ARC4Contract):
    pass
```
A default `@algopy.arc4.baremethod` that allows contract creation is automatically inserted if no other public method allows execution on create.
The approval program is always automatically generated, and consists of a router which delegates based on the transaction application args to the correct public method.
A default `clear_state_program` is implemented which always approves, but this can be overridden.
### Example: An ARC-4 call counter
[Section titled “Example: An ARC-4 call counter”](#example-an-arc-4-call-counter)
```python
import algopy


class ARC4Counter(algopy.ARC4Contract):
    def __init__(self) -> None:
        self.counter = algopy.UInt64(0)

    @algopy.arc4.abimethod(create="allow")
    def invoke(self) -> algopy.arc4.UInt64:
        self.increment_counter()
        return algopy.arc4.UInt64(self.counter)

    @algopy.subroutine
    def increment_counter(self) -> None:
        self.counter += 1
```
This functions very similarly to the [simple example](#example-simple-call-counter).
Things to note here:
* Since the `invoke` method has `create="allow"`, it can be called both as the method to create the app and also to invoke it after creation. This also means that no default bare-method create will be generated, so the only way to create the contract is through this method.
* The default options for `abimethod` only allow `NoOp` as an on-completion action, so we don’t need to check this manually.
* The current call count is returned from the `invoke` method.
* Every method in an `ARC4Contract`, except for the optional `__init__` and `clear_state_program` methods, must be decorated with one of `algopy.arc4.abimethod`, `algopy.arc4.baremethod`, or `algopy.subroutine`. Subroutines won’t be directly callable through the default router.
See the [ARC-4 section](/algokit/languages/python/lg-arc4/) of this language guide for more info on the above.
## Logic signatures
[Section titled “Logic signatures”](#logic-signatures)
[Logic signatures on Algorand](https://dev.algorand.co/concepts/smart-contracts/logic-sigs/) are stateless, and consist of a single program. As such, they are implemented as functions in Algorand Python rather than classes.
```python
@algopy.logicsig
def my_log_sig() -> bool:
...
```
Similar to `approval_program` or `clear_state_program` methods, the function must take no arguments, and return either `bool` or `algopy.UInt64`. The meaning is the same: a `True` value or non-zero `UInt64` value indicates success, `False` or `UInt64(0)` indicates failure.
Logic signatures can make use of subroutines that are not nested in contract classes.
# Transactions
Algorand Python provides types for accessing fields of other transactions in a group, as well as creating and submitting inner transactions from your smart contract.
The following types are available:
| Group Transactions | Inner Transaction Field sets | Inner Transaction |
| ---------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| [PaymentTransaction](/docs/algorand-python/python/latest/api/api-algopy.gtxn#algopy.gtxn.PaymentTransaction) | [Payment](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.Payment) | [PaymentInnerTransaction](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.PaymentInnerTransaction) |
| [KeyRegistrationTransaction](/docs/algorand-python/python/latest/api/api-algopy.gtxn#algopy.gtxn.KeyRegistrationTransaction) | [KeyRegistration](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.KeyRegistration) | [KeyRegistrationInnerTransaction](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.KeyRegistrationInnerTransaction) |
| [AssetConfigTransaction](/docs/algorand-python/python/latest/api/api-algopy.gtxn#algopy.gtxn.AssetConfigTransaction) | [AssetConfig](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.AssetConfig) | [AssetConfigInnerTransaction](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.AssetConfigInnerTransaction) |
| [AssetTransferTransaction](/docs/algorand-python/python/latest/api/api-algopy.gtxn#algopy.gtxn.AssetTransferTransaction) | [AssetTransfer](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.AssetTransfer) | [AssetTransferInnerTransaction](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.AssetTransferInnerTransaction) |
| [AssetFreezeTransaction](/docs/algorand-python/python/latest/api/api-algopy.gtxn#algopy.gtxn.AssetFreezeTransaction) | [AssetFreeze](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.AssetFreeze) | [AssetFreezeInnerTransaction](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.AssetFreezeInnerTransaction) |
| [ApplicationCallTransaction](/docs/algorand-python/python/latest/api/api-algopy.gtxn#algopy.gtxn.ApplicationCallTransaction) | [ApplicationCall](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.ApplicationCall) | [ApplicationCallInnerTransaction](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.ApplicationCallInnerTransaction) |
| [Transaction](/docs/algorand-python/python/latest/api/api-algopy.gtxn#algopy.gtxn.Transaction) | [InnerTransaction](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.InnerTransaction) | [InnerTransactionResult](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.InnerTransactionResult) |
## Group Transactions
[Section titled “Group Transactions”](#group-transactions)
Group transactions can be used as ARC-4 parameters or instantiated from a group index.
### ARC-4 parameter
[Section titled “ARC-4 parameter”](#arc-4-parameter)
Group transactions can be used as parameters in ARC-4 methods.
For example, to require a payment transaction in an ARC-4 ABI method:
```python
import algopy


class MyContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod()
    def process_payment(self, payment: algopy.gtxn.PaymentTransaction) -> None:
        ...
```
### Group Index
[Section titled “Group Index”](#group-index)
Group transactions can also be created using the group index of the transaction. If instantiating one of the type-specific transactions, it will be checked to ensure the transaction is of the expected type. [Transaction](/docs/algorand-python/python/latest/api/api-algopy.gtxn#algopy.gtxn.Transaction) is not checked for a specific type and provides access to all transaction fields.
For example, to obtain a reference to a payment transaction:
```python
import algopy


@algopy.subroutine()
def process_payment(group_index: algopy.UInt64) -> None:
    pay_txn = algopy.gtxn.PaymentTransaction(group_index)
    ...
```
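The type check performed when instantiating a type-specific group transaction can be modelled in plain Python (illustrative structures only; the real check is emitted by the compiler):

```python
# a hypothetical transaction group, modelled as dictionaries
GROUP = [
    {"type": "pay", "amount": 1_000_000},
    {"type": "axfer", "asset_amount": 5},
]


def payment_at(group_index: int) -> dict:
    txn = GROUP[group_index]
    # instantiating algopy.gtxn.PaymentTransaction asserts the txn type matches
    assert txn["type"] == "pay", "transaction type is not pay"
    return txn


assert payment_at(0)["amount"] == 1_000_000
try:
    payment_at(1)  # an asset transfer, so the check fails
except AssertionError:
    pass
else:
    raise RuntimeError("expected the type check to fail")
```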
## Inner Transactions
[Section titled “Inner Transactions”](#inner-transactions)
Inner transactions are defined using the parameter types, and can then be submitted individually by calling the `.submit()` method, or as a group by calling [`submit_txns`](/docs/algorand-python/python/latest/api/api-algopy.itxn#algopy.itxn.submit_txns).
### Examples
[Section titled “Examples”](#examples)
#### Create and submit an inner transaction
[Section titled “Create and submit an inner transaction”](#create-and-submit-an-inner-transaction)
```python
from algopy import Account, UInt64, itxn, subroutine


@subroutine
def example(amount: UInt64, receiver: Account) -> None:
    itxn.Payment(
        amount=amount,
        receiver=receiver,
        fee=0,
    ).submit()
```
#### Accessing result of a submitted inner transaction
[Section titled “Accessing result of a submitted inner transaction”](#accessing-result-of-a-submitted-inner-transaction)
```python
from algopy import Asset, itxn, subroutine


@subroutine
def example() -> Asset:
    asset_txn = itxn.AssetConfig(
        asset_name=b"Puya",
        unit_name=b"PYA",
        total=1000,
        decimals=3,
        fee=0,
    ).submit()
    return asset_txn.created_asset
```
#### Submitting multiple transactions
[Section titled “Submitting multiple transactions”](#submitting-multiple-transactions)
```python
from algopy import Asset, Bytes, itxn, log, subroutine


@subroutine
def example() -> tuple[Asset, Bytes]:
    asset1_params = itxn.AssetConfig(
        asset_name=b"Puya",
        unit_name=b"PYA",
        total=1000,
        decimals=3,
        fee=0,
    )
    app_params = itxn.ApplicationCall(
        app_id=1234,
        app_args=(Bytes(b"arg1"), Bytes(b"arg2")),
    )
    asset1_txn, app_txn = itxn.submit_txns(asset1_params, app_params)

    # log some details
    log(app_txn.logs(0))
    log(asset1_txn.txn_id)
    log(app_txn.txn_id)

    return asset1_txn.created_asset, app_txn.logs(1)
```
#### Create an ARC-4 application, and then call it
[Section titled “Create an ARC-4 application, and then call it”](#create-an-arc-4-application-and-then-call-it)
```python
from algopy import Bytes, arc4, itxn, subroutine

HELLO_WORLD_APPROVAL: bytes = ...
HELLO_WORLD_CLEAR: bytes = ...


@subroutine
def example() -> None:
    # create an application
    application_txn = itxn.ApplicationCall(
        approval_program=HELLO_WORLD_APPROVAL,
        clear_state_program=HELLO_WORLD_CLEAR,
        fee=0,
    ).submit()
    app = application_txn.created_app

    # invoke an ABI method
    call_txn = itxn.ApplicationCall(
        app_id=app,
        app_args=(arc4.arc4_signature("hello(string)string"), arc4.String("World")),
        fee=0,
    ).submit()

    # extract result
    hello_world_result = arc4.String.from_log(call_txn.last_log)
```
#### Create and submit transactions in a loop
[Section titled “Create and submit transactions in a loop”](#create-and-submit-transactions-in-a-loop)
```python
from algopy import Account, UInt64, itxn, subroutine


@subroutine
def example(receivers: tuple[Account, Account, Account]) -> None:
    for receiver in receivers:
        itxn.Payment(
            amount=UInt64(1_000_000),
            receiver=receiver,
            fee=0,
        ).submit()
```
### Limitations
[Section titled “Limitations”](#limitations)
Inner transactions are powerful, but currently have some restrictions on how they can be used.
#### Inner transaction objects cannot be passed to or returned from subroutines
[Section titled “Inner transaction objects cannot be passed to or returned from subroutines”](#inner-transaction-objects-cannot-be-passed-to-or-returned-from-subroutines)
```python
from algopy import Application, Bytes, itxn, subroutine


@subroutine
def parameter_not_allowed(txn: itxn.PaymentInnerTransaction) -> None:
    # this is a compile error
    ...


@subroutine
def return_not_allowed() -> itxn.PaymentInnerTransaction:
    # this is a compile error
    ...


@subroutine
def passing_fields_allowed() -> Application:
    txn = itxn.ApplicationCall(...).submit()
    do_something(txn.txn_id, txn.logs(0))  # this is ok
    return txn.created_app  # and this is ok


@subroutine
def do_something(txn_id: Bytes, log_data: Bytes) -> None:  # this is just a regular subroutine
    ...
```
#### Inner transaction parameters cannot be reassigned without a `.copy()`
[Section titled “Inner transaction parameters cannot be reassigned without a .copy()”](#inner-transaction-parameters-cannot-be-reassigned-without-a-copy)
```python
from algopy import itxn, subroutine


@subroutine
def example() -> None:
    payment = itxn.Payment(...)
    reassigned_payment = payment  # this is an error
    copied_payment = payment.copy()  # this is ok
```
#### Inner transactions cannot be reassigned
[Section titled “Inner transactions cannot be reassigned”](#inner-transactions-cannot-be-reassigned)
```python
from algopy import itxn, subroutine


@subroutine
def example() -> None:
    payment_txn = itxn.Payment(...).submit()
    reassigned_payment_txn = payment_txn  # this is an error
    txn_id = payment_txn.txn_id  # this is ok
```
#### Inner transaction methods cannot be called if there is a subsequent inner transaction submitted or another subroutine is called
[Section titled “Inner transactions methods cannot be called if there is a subsequent inner transaction submitted or another subroutine is called”](#inner-transactions-methods-cannot-be-called-if-there-is-a-subsequent-inner-transaction-submitted-or-another-subroutine-is-called)
```python
from algopy import itxn, subroutine


@subroutine
def example() -> None:
    app_1 = itxn.ApplicationCall(...).submit()
    log_from_call1 = app_1.logs(0)  # this is ok

    # another inner transaction is submitted
    itxn.ApplicationCall(...).submit()

    # or another subroutine is called
    call_some_other_subroutine()

    app1_txn_id = app_1.txn_id  # this is ok, properties are still available
    another_log_from_call1 = app_1.logs(1)  # this is not allowed, as the array results may no longer be available;
    # instead, assign the result to a variable before submitting another transaction
```
# Types
Algorand Python exposes a number of types that provide a statically typed representation of the behaviour that is possible on the Algorand Virtual Machine.
> ##### Types
>
> [Section titled “Types”](#types)
>
> * [AVM types](#avm-types)
>
> * [UInt64](#uint64)
> * [Bytes](#bytes)
> * [String](#string)
> * [BigUInt](#biguint)
> * [bool](#bool)
> * [Account](#account)
> * [Asset](#asset)
> * [Application](#application)
>
> * [Python built-in types](#python-built-in-types)
>
> * [bool](#id2)
> * [tuple](#tuple)
> * [typing.NamedTuple](#typing-namedtuple)
> * [None](#none)
> * [int, str, bytes, float](#int-str-bytes-float)
>
> * [Template variables](#template-variables)
>
> * [ARC-4 types](#arc-4-types)
>
> * [Type Validation](#type-validation)
>
> * [Validated Sources of Values](#validated-sources-of-values)
> * [Non-Validated Sources](#non-validated-sources)
## AVM types
[Section titled “AVM types”](#avm-types)
The most basic [types on the AVM](https://dev.algorand.co/concepts/smart-contracts/avm/#stack-types) are `uint64` and `bytes[]`, representing unsigned 64-bit integers and byte arrays respectively. These are represented by [`UInt64`](#uint64) and [`Bytes`](#bytes) in Algorand Python.
There are further “bounded” types supported by the AVM, which are backed by these two simple primitives. For example, `bigint` represents a variably sized (up to 512 bits), unsigned integer, but is actually backed by a `bytes[]`. This is represented by [`BigUInt`](#biguint) in Algorand Python.
### UInt64
[Section titled “UInt64”](#uint64)
[`algopy.UInt64`](/docs/algorand-python/python/latest/api/api-algopy#algopy.UInt64) represents the underlying AVM `uint64` type.
It supports all the same operators as `int`, except for `/`; you must use `//` for truncating division instead.
```python
# you can instantiate with an integer literal
num = algopy.UInt64(1)
# no arguments default to the zero value
zero = algopy.UInt64()
# zero is False, any other value is True
assert not zero
assert num
# Like Python's `int`, `UInt64` is immutable, so augmented assignment operators return new values
one = num
num += 1
assert one == 1
assert num == 2
# note that once you have a variable of type UInt64, you don't need to type any variables
# derived from that or wrap int literals
num2 = num + 200 // 3
```
[Further examples available here](https://github.com/algorandfoundation/puya/blob/main/test_cases/stubs/uint64.py).
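A plain-Python model of the semantics above (illustrative only; the real checks happen on the AVM, which errors immediately on `uint64` overflow):

```python
MAX_UINT64 = 2**64 - 1


def uint64_add(a: int, b: int) -> int:
    # the AVM errors immediately if a uint64 operation overflows
    result = a + b
    assert result <= MAX_UINT64, "uint64 overflow"
    return result


assert uint64_add(1, 1) == 2
assert uint64_add(MAX_UINT64, 0) == MAX_UINT64
# truncating division: UInt64 supports // but not /
assert 200 // 3 == 66
```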
### Bytes
[Section titled “Bytes”](#bytes)
[`algopy.Bytes`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Bytes) represents the underlying AVM `bytes[]` type. It is intended to represent binary data; for UTF-8 data, it might be preferable to use [String](#string).
```python
# you can instantiate with a bytes literal
data = algopy.Bytes(b"abc")
# no arguments defaults to an empty value
empty = algopy.Bytes()
# empty is False, non-empty is True
assert data
assert not empty
# Like Python's `bytes`, `Bytes` is immutable, augmented assignment operators return new values
abc = data
data += b"def"
assert abc == b"abc"
assert data == b"abcdef"
# indexing and slicing are supported, and both return a Bytes
assert abc[0] == b"a"
assert data[:3] == abc
# check if a bytes sequence occurs within another
assert abc in data
```
#### HINT
[Section titled “HINT”](#hint)
Indexing a `Bytes` returns a `Bytes`, which differs from the behaviour of Python’s `bytes` type, where indexing returns an `int`.
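In plain Python, the difference looks like this:

```python
data = b"abc"
# built-in bytes: indexing yields an int, slicing yields bytes
assert data[0] == 97  # ord("a")
assert data[0:1] == b"a"
# algopy.Bytes indexing behaves like the slice form, yielding a one-byte Bytes
```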
```python
# you can iterate
for i in abc:
    ...
# construct from encoded values
base32_seq = algopy.Bytes.from_base32('74======')
base64_seq = algopy.Bytes.from_base64('RkY=')
hex_seq = algopy.Bytes.from_hex('FF')
# binary manipulations ^, &, |, and ~ are supported
data ^= ~((base32_seq & base64_seq) | hex_seq)
# access the length via the .length property
assert abc.length == 3
```
#### NOTE
[Section titled “NOTE”](#note)
See [Python builtins](/algokit/languages/python/lg-builtins/#len---length) for an explanation of why `len()` isn’t supported.
[See a full example](https://github.com/algorandfoundation/puya/blob/main/test_cases/stubs/bytes.py).
### String
[Section titled “String”](#string)
[`String`](/docs/algorand-python/python/latest/api/api-algopy#algopy.String) is a special Algorand Python type that represents a UTF-8 encoded string. It’s backed by `Bytes`, which can be accessed through the `.bytes` property.
It works similarly to `Bytes`, except that it works with `str` literals rather than `bytes` literals. Additionally, due to a lack of AVM support for unicode data, indexing and length operations are not currently supported (simply getting the length of a UTF-8 string is an `O(N)` operation, which would be quite costly in a smart contract). If you are happy using the length as the number of bytes, then you can call `.bytes.length`.
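The distinction between byte length and character count can be illustrated with plain Python strings:

```python
s = "café"
assert len(s) == 4                  # character count: needs UTF-8 decoding
assert len(s.encode("utf-8")) == 5  # byte length: "é" encodes to two bytes
# algopy.String only exposes the cheap byte length, via .bytes.length
```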
```python
# you can instantiate with a string literal
data = algopy.String("abc")
# no arguments defaults to an empty value
empty = algopy.String()
# empty is False, non-empty is True
assert data
assert not empty
# Like Python's `str`, `String` is immutable, augmented assignment operators return new values
abc = data
data += "def"
assert abc == "abc"
assert data == "abcdef"
# whilst indexing and slicing are not supported, the following tests are:
assert abc.startswith("ab")
assert abc.endswith("bc")
assert abc in data
# you can also join multiple Strings together with a separator:
assert algopy.String(", ").join((abc, abc)) == "abc, abc"
# access the underlying bytes
assert abc.bytes == b"abc"
```
[See a full example](https://github.com/algorandfoundation/puya/blob/main/test_cases/stubs/string.py).
### BigUInt
[Section titled “BigUInt”](#biguint)
[`algopy.BigUInt`](/docs/algorand-python/python/latest/api/api-algopy#algopy.BigUInt) represents a variable length (max 512-bit) unsigned integer stored as `bytes[]` in the AVM.
It supports all the same operators as `int`, except for power (`**`), left and right shift (`<<` and `>>`) and `/` (as with `UInt64`, you must use `//` for truncating division instead).
Note that the op code costs for `bigint` math are an order of magnitude higher than those for `uint64` math. If you just need to handle overflow, take a look at the wide ops such as `addw`, `mulw`, etc - all of which are exposed through the [`algopy.op`](/docs/algorand-python/python/latest/api/api-algopy.op#module-algopy.op) module.
Another contrast between `bigint` and `uint64` math is that `bigint` math ops don’t immediately error on overflow - if the result exceeds 512-bits, then you can still access the value via `.bytes`, but any further math operations will fail.
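The byte-backed representation can be modelled with plain Python’s big-endian integer conversions (an illustrative sketch of the encoding, not of the AVM ops themselves):

```python
def biguint_add(a: bytes, b: bytes) -> bytes:
    # AVM bigint values are unsigned big-endian integers backed by bytes
    result = int.from_bytes(a, "big") + int.from_bytes(b, "big")
    # a result wider than 512 bits is still representable as bytes, but
    # feeding it into further bigint math would fail on the AVM
    length = max(1, (result.bit_length() + 7) // 8)
    return result.to_bytes(length, "big")


assert int.from_bytes(biguint_add(b"\x01", b"\x02"), "big") == 3
max_512 = (2**512 - 1).to_bytes(64, "big")
assert len(biguint_add(max_512, b"\x01")) == 65  # result exceeds 512 bits
```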
```python
# you can instantiate with an integer literal
num = algopy.BigUInt(1)
# no arguments default to the zero value
zero = algopy.BigUInt()
# zero is False, any other value is True
assert not zero
assert num
# Like Python's `int`, `BigUInt` is immutable, so augmented assignment operators return new values
one = num
num += 1
assert one == 1
assert num == 2
# note that once you have a variable of type BigUInt, you don't need to type any variables
# derived from that or wrap int literals
num2 = num + 200 // 3
```
[Further examples available here](https://github.com/algorandfoundation/puya/blob/main/test_cases/stubs/biguint.py).
### bool
[Section titled “bool”](#bool)
The semantics of the AVM `bool` bounded type exactly match the semantics of Python’s built-in `bool` type and thus Algorand Python uses the in-built `bool` type from Python.
Per the behaviour in normal Python, Algorand Python automatically converts various types to `bool` when they appear in statements that expect a `bool` (e.g. `if`/`while`/`assert` statements), appear in Boolean expressions (e.g. next to the `and` or `or` keywords), or are explicitly cast to a `bool`.
The semantics of `not`, `and` and `or` are special [per how these keywords work in Python](https://docs.python.org/3/reference/expressions.html#boolean-operations) (e.g. short circuiting).
```python
a = UInt64(1)
b = UInt64(2)
c = a or b
d = b and a
e = self.expensive_op(UInt64(0)) or self.side_effecting_op(UInt64(1))
f = self.expensive_op(UInt64(3)) or self.side_effecting_op(UInt64(42))
g = self.side_effecting_op(UInt64(0)) and self.expensive_op(UInt64(42))
h = self.side_effecting_op(UInt64(2)) and self.expensive_op(UInt64(3))
i = a if b < c else d + e
if a:
log("a is True")
```
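As in standard Python, `and`/`or` return one of their operands and skip evaluating the right-hand side when the result is already determined:

```python
calls = []


def op(value: int) -> int:
    calls.append(value)
    return value


assert (op(0) or op(2)) == 2   # left is falsy, so the right side is evaluated
assert (op(3) or op(42)) == 3  # left is truthy, so the right side is skipped
assert (op(5) and op(7)) == 7  # left is truthy, so the right side is evaluated
assert calls == [0, 2, 3, 5, 7]
```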
[Further examples available here](https://github.com/algorandfoundation/puya/blob/main/test_cases/stubs/uint64.py).
### Account
[Section titled “Account”](#account)
[`Account`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Account) represents a logical Account, backed by a `bytes[32]` representing the bytes of the public key (without the checksum). It has various account related methods that can be called from the type.
Also see [`algopy.arc4.Address`](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.Address) if needing to represent the address as a distinct type.
### Asset
[Section titled “Asset”](#asset)
[`Asset`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Asset) represents a logical Asset, backed by a `uint64` ID. It has various asset related methods that can be called from the type.
### Application
[Section titled “Application”](#application)
[`Application`](/docs/algorand-python/python/latest/api/api-algopy#algopy.Application) represents a logical Application, backed by a `uint64` ID. It has various application related methods that can be called from the type.
## Python built-in types
[Section titled “Python built-in types”](#python-built-in-types)
Unfortunately, the [AVM types](#avm-types) don’t map to standard Python primitives. For instance, in Python, an `int` is signed and effectively unbounded. A `bytes` similarly is limited only by the memory available, whereas an AVM `bytes[]` has a maximum length of 4096. In order to both maintain semantic compatibility and allow for a framework implementation in plain Python that will fail under the same conditions as when deployed to the AVM, support for Python primitives is limited.
In saying that, there are many places where built-in Python types can be used and over time the places these types can be used are expected to increase.
### bool
[Section titled “bool”](#bool-1)
[Per above](#bool), Algorand Python has full support for `bool`.
### tuple
[Section titled “tuple”](#tuple)
Python tuples are supported as arguments to subroutines, as local variables, and as return types.
### typing.NamedTuple
[Section titled “typing.NamedTuple”](#typingnamedtuple)
Python named tuples are also supported using [`typing.NamedTuple`](https://docs.python.org/3/library/typing.html#typing.NamedTuple).
#### NOTE
[Section titled “NOTE”](#note-1)
Default field values and subclassing a NamedTuple are not supported.
```python
import typing

import algopy


class Pair(typing.NamedTuple):
    foo: algopy.Bytes
    bar: algopy.Bytes
```
### None
[Section titled “None”](#none)
`None` is not supported as a value, but is supported as a type annotation to indicate a function or subroutine returns no value.
### int, str, bytes, float
[Section titled “int, str, bytes, float”](#int-str-bytes-float)
The `int`, `str` and `bytes` built-in types are currently only supported as [module-level constants](/algokit/languages/python/lg-modules/) or literals.
They can be passed as arguments to various Algorand Python methods that support them, or when interacting with certain [AVM types](#avm-types), e.g. adding a number to a `UInt64`.
`float` is not supported.
## Template variables
[Section titled “Template variables”](#template-variables)
Template variables can be used to represent a placeholder for a deploy-time provided value. This can be declared using the `TemplateVar[TYPE]` type where `TYPE` is the Algorand Python type that it will be interpreted as.
```python
from algopy import BigUInt, Bytes, TemplateVar, UInt64, arc4
from algopy.arc4 import UInt512


class TemplateVariablesContract(arc4.ARC4Contract):
    @arc4.abimethod()
    def get_bytes(self) -> Bytes:
        return TemplateVar[Bytes]("SOME_BYTES")

    @arc4.abimethod()
    def get_big_uint(self) -> UInt512:
        x = TemplateVar[BigUInt]("SOME_BIG_UINT")
        return UInt512(x)

    @arc4.baremethod(allow_actions=["UpdateApplication"])
    def on_update(self) -> None:
        assert TemplateVar[bool]("UPDATABLE")

    @arc4.baremethod(allow_actions=["DeleteApplication"])
    def on_delete(self) -> None:
        assert TemplateVar[UInt64]("DELETABLE")
```
The resulting TEAL code that PuyaPy emits has placeholders with `TMPL_{template variable name}` that expects either an integer value or an encoded bytes value. This behaviour exactly matches what [AlgoKit Utils expects](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/docs/capabilities/app-deploy.md#compilation-and-template-substitution).
For more information look at the API reference for [`TemplateVar`](/docs/algorand-python/python/latest/api/api-algopy#algopy.TemplateVar).
## ARC-4 types
[Section titled “ARC-4 types”](#arc-4-types)
ARC-4 data types are a first class concept in Algorand Python. They can be passed into ARC-4 methods (which will translate to the relevant ARC-4 method signature), passed into subroutines, or instantiated into local variables. A limited set of operations are exposed on some ARC-4 types, but often it may make sense to convert the ARC-4 value to a native AVM type, in which case you can use the `native` property to retrieve the value. Most of the ARC-4 types also allow for mutation e.g. you can edit values in arrays by index.
Please see the [reference documentation](/docs/algorand-python/python/latest/api/api-algopy.arc4) for the different classes that can be used to represent ARC-4 values or the [ARC-4 documentation](/algokit/languages/python/lg-arc4/) for more information about ARC-4.
## Type Validation
[Section titled “Type Validation”](#type-validation)
Most high-order types (i.e. not `UInt64` or `Bytes`) supported by Algorand Python exist as a single byte array value with a specific encoding. When reading one of these values from an untrusted source, it is important to validate the encoding of the value before using it. For example, when expecting an `Account`, one should validate that there are exactly 32 bytes in the underlying value.
PuyaPy automatically validates some value sources for you, whilst leaving others to be explicitly validated by the developer. You should always validate untrusted sources (such as ABI args from untrusted clients) but may wish to omit validation for performance/efficiency reasons from trusted sources (such as a Global state value only your application accesses).
For more detailed information on the impacts of type validation refer to [this section](https://dev.algorand.co/concepts/smart-contracts/abi/#validating-abi-values) in the developer portal.
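For example, a manual length check for an `Account`-typed value could be sketched in plain Python (illustrative only; in a contract this corresponds to calling `.validate()` or relying on the compiler’s checks):

```python
def validate_account_encoding(value: bytes) -> bytes:
    # an Account is backed by exactly 32 bytes of public key
    assert len(value) == 32, "invalid Account encoding"
    return value


assert validate_account_encoding(bytes(32)) == bytes(32)
try:
    validate_account_encoding(b"too short")
except AssertionError:
    pass
else:
    raise RuntimeError("expected validation to fail")
```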
### Validated Sources of Values
[Section titled “Validated Sources of Values”](#validated-sources-of-values)
The following sources of ABI values are validated by the compiler by default.
* ABI method arguments (when called externally)
* ABI return values
* Bytes.to\_fixed (with the `assert-length` strategy)
**NOTE**: Argument validation can be disabled globally via the `--validate-abi-args` flag. Similarly, return value validation can be disabled via the `--validate-abi-return` flag. It is also possible for a method implementation to disable validation for its own arguments via the `validate_encoding` option on the `abimethod` decorator; per-method argument validation settings override the global compiler settings. To bypass return validation for a specific call, you can parse the return value directly from the inner transaction’s last log and use an unsafe method (`.from_bytes`) to convert the bytes to the desired ABI type.
### Non-Validated Sources
[Section titled “Non-Validated Sources”](#non-validated-sources)
There are certain places where one can get an ABI value that is not fully validated:
* Global state
* Local state
* Boxes
* Subroutine arguments
* Subroutine return values
* `from_bytes` methods on ABI types
There are no automatic validation steps taken for these values because it is assumed the value was already validated by the compiler before reaching this point.
For example, if a method takes an ABI value as an argument and stores it in a box, the value is validated when taken as input from the method arguments, but not when placed in the box. By default, all sources of ABI values other than those listed above are validated, so it would be inefficient to perform validation again every time the value is used.
It should be noted, however, that all of the validation the Puya compiler performs automatically can be disabled on a per-method basis. This means it is theoretically possible for an incorrectly encoded value to come from one of the listed sources, but it will always be clear in the source code when this is the case.
For example, given the following contract:
```py
class BoxReadWrite(ARC4Contract):
    def __init__(self) -> None:
        self.acct_box = Box(Account)

    @abimethod()
    def write_to_box(self, acct: Account) -> None:
        self.acct_box.value = acct

    @abimethod()
    def read_from_box(self) -> Account:
        return self.acct_box.value
```
One can be sure that the value in `acct_box` is always valid because the only source of the value is an ABI argument (`acct` in `write_to_box`). If validation was disabled, however, then one cannot trust that it is properly encoded, and should perform a manual validation if required:
```py
class BoxReadWrite(ARC4Contract):
    def __init__(self) -> None:
        self.acct_box = Box(Account)

    @abimethod(validate_encoding="unsafe_disabled")
    def write_to_box(self, acct: Account) -> None:
        acct.validate()
        self.acct_box.value = acct

    @abimethod()
    def read_from_box(self) -> Account:
        return self.acct_box.value
```
Similarly, if the `Account` is constructed from bytes, a manual validation should be performed:
```py
def write_to_box(self, acct_bytes: Bytes) -> None:
    acct = Account.from_bytes(acct_bytes)
    acct.validate()
    self.acct_box.value = acct
```
# Unsupported Python features
## raise, try/except/finally
[Section titled “raise, try/except/finally”](#raise-tryexceptfinally)
Exception raising and exception handling constructs are not supported.
Supporting user exceptions would be costly to implement in terms of op codes.
Furthermore, AVM errors and exceptions are not “catch-able”; they immediately terminate the program.
Therefore, there is little to no benefit in supporting exceptions and exception handling.
The preferred method of raising an error that terminates is through the use of [assert statements](/algokit/languages/python/lg-errors/).
## with
[Section titled “with”](#with)
Context managers are redundant without exception handling support.
## async
[Section titled “async”](#async)
The AVM is not only single threaded; all operations are effectively “blocking”, rendering asynchronous programming useless.
## closures & lambdas
[Section titled “closures & lambdas”](#closures--lambdas)
Without support for function pointers, or other means of invoking an arbitrary function, it’s not possible to return a function as a closure.
Nested functions/lambdas as a means of repeating common operations within a given function may be supported in the future.
## global keyword
[Section titled “global keyword”](#global-keyword)
Module level values are only allowed to be [constants](/algokit/languages/python/lg-modules/#module-constants). No rebinding of module constants is allowed. It’s not clear what the meaning here would be, since there’s no real arbitrary means of storing state without associating it with a particular contract. If you do have need of such a thing, take a look at [gload\_bytes](/docs/algorand-python/python/latest/api/api-algopy.op#algopy.op.gload_bytes) or [gload\_uint64](/docs/algorand-python/python/latest/api/api-algopy.op#algopy.op.gload_uint64) if the contracts are within the same transaction, otherwise [AppGlobal.get\_ex\_bytes](/docs/algorand-python/python/latest/api/api-algopy.op#algopy.op.AppGlobal.get_ex_bytes) and [AppGlobal.get\_ex\_uint64](/docs/algorand-python/python/latest/api/api-algopy.op#algopy.op.AppGlobal.get_ex_uint64).
## Inheritance (outside of contract classes)
[Section titled “Inheritance (outside of contract classes)”](#inheritance-outside-of-contract-classes)
Polymorphism is also impossible to support without function pointers, so data classes (such as [arc4.Struct](/docs/algorand-python/python/latest/api/api-algopy.arc4#algopy.arc4.Struct)) don’t currently allow for inheritance. Member functions on data classes are also not supported, because it’s not yet clear whether it’s better to disallow inheritance but allow functions on data classes, or to allow inheritance and disallow member functions.
Contract inheritance is a special case: since each concrete contract is compiled separately, true polymorphism isn’t required, as all references can be resolved at compile time.
# Algorand Python
Algorand Python is a partial implementation of the Python programming language that runs on the AVM. It includes a statically typed framework for development of Algorand smart contracts and logic signatures, with Pythonic interfaces to underlying AVM functionality that works with standard Python tooling.
Algorand Python is compiled for execution on the AVM by PuyaPy, an optimising compiler that ensures the resulting AVM bytecode has execution semantics that match the given Python code. PuyaPy produces output that is directly compatible with [AlgoKit typed clients](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/generate.md#1-typed-clients) to make deployment and calling easy.
## Quick start
[Section titled “Quick start”](#quick-start)
The easiest way to use Algorand Python is to instantiate a template with AlgoKit via `algokit init -t python`. This will give you a full development environment with intellisense, linting, automatic formatting, breakpoint debugging, deployment and CI/CD.
Alternatively, if you want to start from scratch you can do the following:
1. Ensure you have Python 3.12+
2. Install [AlgoKit CLI](https://github.com/algorandfoundation/algokit-cli?tab=readme-ov-file#install)
3. Check you can run the compiler:
```shell
algokit compile py -h
```
4. Install Algorand Python into your project: `poetry add algorand-python`
5. Create a contract in a file, e.g. `contract.py`:
```python
from algopy import ARC4Contract, arc4

class HelloWorldContract(ARC4Contract):
    @arc4.abimethod
    def hello(self, name: arc4.String) -> arc4.String:
        return "Hello, " + name
```
6. Compile the contract:
```shell
algokit compile py contract.py
```
7. You should now have `HelloWorldContract.approval.teal` and `HelloWorldContract.clear.teal` on the file system!
8. We generally recommend using ARC-56 and [generated typed clients](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/generate.md#1-typed-clients) for the best deployment and consumption experience; PuyaPy produces an ARC-56 compatible app spec file by default:
```shell
algokit compile py contract.py --no-output-teal
```
9. You should now have `HelloWorldContract.arc56.json`, from which a typed client can be generated, e.g. using the AlgoKit CLI:
```shell
algokit generate client HelloWorldContract.arc56.json --output client.py
```
10. From here you can dive into the [examples](https://github.com/algorandfoundation/puya/tree/main/examples) or look at the [documentation](https://algorandfoundation.github.io/puya/).
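As an illustration of what a typed client consumes, the following sketch parses a minimal, hand-written ARC-56-style app spec fragment using only the standard library. The JSON shown is illustrative and heavily abbreviated (field names are assumptions for the example; consult the ARC-56 standard for the real schema):

```python
import json

# Illustrative only: a minimal ARC-56-style app spec fragment.
app_spec_json = """
{
  "name": "HelloWorldContract",
  "methods": [
    {
      "name": "hello",
      "args": [{"type": "string", "name": "name"}],
      "returns": {"type": "string"}
    }
  ]
}
"""

spec = json.loads(app_spec_json)

# Build ARC-4 style method signatures such as "hello(string)string",
# which is what a typed client uses to route and encode ABI calls.
signatures = [
    f"{m['name']}({','.join(a['type'] for a in m['args'])}){m['returns']['type']}"
    for m in spec["methods"]
]
print(signatures)  # ['hello(string)string']
```

A generated typed client does essentially this for you, exposing each method signature as a strongly typed function.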
## Programming with Algorand Python
[Section titled “Programming with Algorand Python”](#programming-with-algorand-python)
To get started developing with Algorand Python, please take a look at the [Language Guide](/algokit/languages/python/language-guide/).
## Using the PuyaPy compiler
[Section titled “Using the PuyaPy compiler”](#using-the-puyapy-compiler)
To see detailed guidance for using the PuyaPy compiler, please take a look at the [Compiler guide](/algokit/languages/python/compiler/).
# Principles & Background
## Background
[Section titled “Background”](#background)
**Smart contracts** on the Algorand blockchain run on the Algorand Virtual Machine ([AVM](https://dev.algorand.co/concepts/smart-contracts/avm)). This is a stack based virtual machine, which executes AVM bytecode as part of an [Application Call transaction](https://dev.algorand.co/concepts/transactions/types/#application-call-transaction). The official mechanism for generating this bytecode is by submitting TEAL (Transaction Execution Approval Language) to an Algorand Node to compile.
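To make “stack based” concrete, here is a toy evaluator in the spirit of (but far simpler than) the AVM, executing a program analogous to the TEAL `int 1; int 2; +`. This is a conceptual sketch only, not the AVM’s actual opcode set or semantics:

```python
# A toy stack machine: programs push values and apply operators against a
# stack, mirroring the execution model of stack-based VMs like the AVM.
def run(program: list[tuple]) -> int:
    stack: list[int] = []
    for op, *args in program:
        if op == "int":          # push an integer literal
            stack.append(args[0])
        elif op == "+":          # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError(f"unknown op: {op}")
    return stack.pop()           # the final stack top is the result

# Roughly analogous to the TEAL program: int 1; int 2; +
assert run([("int", 1), ("int", 2), ("+",)]) == 3
```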
**Smart signatures** have the same basis in the AVM and TEAL, but have a different execution model, one not involving Application Call transactions. Our focus will primarily be on smart contracts, since they are strictly more powerful in terms of available AVM functions.
TEAL is a [non-structured](https://en.wikipedia.org/wiki/Non-structured_programming) [imperative language](https://en.wikipedia.org/wiki/Procedural_programming#Imperative_programming) (albeit one with support for procedure calls that can isolate stack changes since v8 with `proto`). Writing TEAL is very similar to writing assembly code. It goes without saying that this is NOT a particularly common or well-practiced model for programming these days.
As it stands today, developers wanting to write smart contracts specifically for Algorand have the option of writing TEAL directly, or using some other mechanism of generating TEAL such as the officially supported [PyTEAL](https://pyteal.readthedocs.io/en/stable/) or the community supported [tealish](https://tealish.tinyman.org/en/latest/).
PyTEAL follows a [generative programming](https://en.wikipedia.org/wiki/Automatic_programming#Generative_programming) paradigm, which is a form of metaprogramming. Naturally, writing programs to generate programs presents an additional hurdle for developers looking to pick up smart contract development. Tooling support is also suboptimal: for example, many classes of errors resulting from the interaction between the procedural elements of the Python language and the PyTEAL expression-building framework go unnoticed until the point of TEAL generation, or worse, go completely unnoticed; and even when PyTEAL can provide an error, it can be difficult to understand.
Tealish provides a higher level procedural language, bearing a passing resemblance to Python, that compiles down to TEAL. However, it’s still lower level than most developers are used to. For example, the expression `1 + 2 + 3` is [not valid in tealish](https://tealish.tinyman.org/en/latest/language.html#math-logic). Another difference versus a higher level language such as Python is that [functions can only be declared after the program entry point logic](https://tealish.tinyman.org/en/latest/language.html#functions). In essence, tealish abstracts away many difficulties of writing plain TEAL, but it is still essentially more of a transpiler than a compiler. Furthermore, whilst appearing to have syntax inspired by Python, it both adds and removes many fundamental syntax elements, presenting an additional learning curve to developers looking to learn blockchain development on Algorand. Being a bespoke language also means it has a much smaller ecosystem of tooling built around it compared to languages like Python or JavaScript.
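For contrast, in standard Python (whose semantics Algorand Python aims to match) chained expressions and operator precedence work without intermediate variables:

```python
# Standard Python semantics: chained operators and precedence need no
# intermediate variables, unlike the tealish restriction mentioned above.
total = 1 + 2 + 3             # left-to-right chaining
mixed = 1 + 2 * 3             # precedence: multiplication binds tighter
grouped = (1 + 2) * 3         # parentheses override precedence
print(total, mixed, grouped)  # 6 7 9
```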
To most developers, the Python programming language needs no introduction. First released in 1991, its popularity has grown steadily over the decades, and as of June 2023 it is consistently ranked as either the most popular language, or second most popular following JavaScript:
* [GitHub 2022](https://octoverse.github.com/2022/top-programming-languages)
* [StackOverflow 2023](https://stackoverflow.blog/2023/06/13/developer-survey-results-are-in/)
* [Tiobe](https://www.tiobe.com/tiobe-index/)
* [PYPL](https://pypl.github.io/PYPL.html)
The AlgoKit project is an Algorand Foundation initiative to improve the developer experience on Algorand. Within this broad remit, two of the key [principles](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/algokit.md#guiding-principles) are to “meet developers where they are” and “leverage existing ecosystem”. Building a compiler that allows developers to write smart contracts using an idiomatic subset of a high level language such as Python would make great strides towards both of these goals.
Wyvern was the original internal code name for just such a compiler (now called Puya), one that transforms Python code into valid TEAL smart contracts. In line with the principle of meeting developers where they are, and recognising the popularity of JavaScript and TypeScript, a parallel initiative to build a TypeScript to TEAL compiler is [available here](https://github.com/algorandfoundation/puya-ts/).
## Principles
[Section titled “Principles”](#principles)
The principles listed here should form the basis of our decision-making, both in the design and implementation.
### Least surprise
[Section titled “Least surprise”](#least-surprise)
Our primary objective is to assist developers in creating accurate smart contracts right from the start. The often immutable nature of these contracts - although not always the case - and the substantial financial value they frequently safeguard, underlines the importance of this goal.
This principle ensures that the code behaves as anticipated by the developer. Specifically, if you’re a Python developer writing Python smart contract code, you can expect the code to behave identically to its execution in a standard Python environment.
Furthermore, we believe in promoting explicitness and correctness in contract code and its associated typing. This approach reduces potential errors and enhances the overall integrity of our smart contracts. Our commitment is to provide a user-friendly platform that aligns with the developer’s intuition and experience, ultimately simplifying their work and minimizing the potential for mistakes.
### Inherited from AlgoKit
[Section titled “Inherited from AlgoKit”](#inherited-from-algokit)
As a part of the AlgoKit project, the principles outlined [there](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/algokit.md#guiding-principles) also apply - to the extent that this project is just one component of AlgoKit.
#### “Leverage existing ecosystem”
[Section titled ““Leverage existing ecosystem””](#leverage-existing-ecosystem)
> AlgoKit functionality gets into the hands of Algorand developers quickly by building on top of the existing ecosystem wherever possible and aligned to these principles.
In order to leverage as much existing Python tooling as possible, we should strive to maintain the highest level of compatibility with the Python language (and the reference implementation: CPython).
#### “Meet developers where they are”
[Section titled ““Meet developers where they are””](#meet-developers-where-they-are)
> Make Blockchain development mainstream by giving all developers an idiomatic development experience in the operating system, IDE and language they are comfortable with so they can dive in quickly and have less they need to learn before being productive.
Python is a very idiomatic language. We should embrace accepted patterns and practices as much as possible, such as those listed in [PEP-20](https://peps.python.org/pep-0020/) (aka “The Zen of Python”).
#### “Extensible”
[Section titled ““Extensible””](#extensible)
> Be extensible for community contribution rather than stifling innovation, bottle-necking all changes through the Algorand Foundation and preventing the opportunity for other ecosystems being represented (e.g. Go, Rust, etc.). This helps make developers feel welcome and is part of the developer experience, plus it makes it easier to add features sustainably
One way to support this principle in the broader AlgoKit context is by building in a mechanism for reusing common code between smart contracts, to allow the community to build their own Python packages.
#### “Sustainable”
[Section titled ““Sustainable””](#sustainable)
> AlgoKit should be built in a flexible fashion with long-term maintenance in mind. Updates to latest patches in dependencies, Algorand protocol development updates, and community contributions and feedback will all feed in to the evolution of the software.
Taking this principle further, ensuring the compiler is well-designed (e.g. frontend/backend separation, with a well-thought-out IR) will help with maintaining and improving the implementation over time. For example, adding new TEAL language features will be easier, as will implementing new optimisation strategies.
Looking to the future, best practices for smart contract development are rapidly evolving. We shouldn’t tie the implementation too tightly to a current standard such as ARC-4 - although in that specific example, we would still aim for first class support, but it shouldn’t be assumed as the only way to write smart contracts.
#### “Modular components”
[Section titled ““Modular components””](#modular-components)
> Solution components should be modular and loosely coupled to facilitate efficient parallel development by small, effective teams, reduced architectural complexity and allowing developers to pick and choose the specific tools and capabilities they want to use based on their needs and what they are comfortable with.
We will focus on the language and compiler design itself.
An example of a very useful feature that is strongly related, but could be implemented separately, is the ability to run the user’s code in a unit-testing context, without compilation and deployment first. This would require implementing, in Python, some level of simulation of Algorand node / AVM behaviour.
#### “Secure by default”
[Section titled ““Secure by default””](#secure-by-default)
> Include defaults, patterns and tooling that help developers write secure code and reduce the likelihood of security incidents in the Algorand ecosystem. This solution should help Algorand be the most secure Blockchain ecosystem.
Enforcing security (which is multi-faceted) at a compiler level is difficult, and in some cases impossible. The best application of this principle here is to support auditing, which is important and nuanced enough to be listed below as a separate principle.
#### “Cohesive developer tool suite” + “Seamless onramp”
[Section titled ““Cohesive developer tool suite” + “Seamless onramp””](#cohesive-developer-tool-suite--seamless-onramp)
> Cohesive developer tool suite: Using AlgoKit should feel professional and cohesive, like it was designed to work together, for the developer; not against them. Developers are guided towards delivering end-to-end, high quality outcomes on MainNet so they and Algorand are more likely to be successful. Seamless onramp: New developers have a seamless experience to get started and they are guided into a pit of success with best practices, supported by great training collateral; you should be able to go from nothing to debugging code in 5 minutes.
These principles relate more to AlgoKit as a whole, so we can respect them by considering the impacts of our decisions there more broadly.
### Abstraction without obfuscation
[Section titled “Abstraction without obfuscation”](#abstraction-without-obfuscation)
Algorand Python is a high level language, with support for things such as branching logic, operator precedence, etc., and not a set of “macros” for generating TEAL. As such, developers will not be able to directly influence specific TEAL output; if this is desirable, a language such as [Tealish](https://tealish.tinyman.org) is more appropriate.
Whilst this will abstract away certain aspects of the underlying TEAL language, there are certain AVM concerns (such as op code budgets) that should not be abstracted away. That said, we should strive to generate code that is cost-effective and unsurprising. Python mechanisms such as dynamic (runtime) dispatch, and also many of its builtin functions on types such as `str` that are taken for granted, would require a large number of ops compared to the Python code they represent.
### Support auditing
[Section titled “Support auditing”](#support-auditing)
Auditing is a critical part of the security process for deploying smart contracts. We want to support this function, and can do so in two ways:
1. By ensuring the same Python code as input generates identical output each time the compiler is run, regardless of the system it’s running on. This is what might be termed [output stability](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/articles/output_stability.md). Ensuring a consistent output regardless of the system it’s run on (assuming the same compiler version) means that auditing the lower level (i.e. TEAL) code is possible.
2. Although auditing the TEAL code should be possible, being able to easily identify and relate it back to the higher level code can make auditing the contract logic simpler and easier.
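A minimal sketch of how an output stability check might work, assuming you hash the compiled TEAL and compare it against an approved digest (real projects such as the AlgoKit templates typically diff the committed `.teal` files directly in CI):

```python
import hashlib

def teal_digest(teal_source: str) -> str:
    # Normalise line endings so the digest is stable across operating systems.
    normalised = teal_source.replace("\r\n", "\n")
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# In a real test you would read the freshly compiled .teal output and compare
# it against an approved copy committed to source control.
approved = teal_digest("#pragma version 10\nint 1\nreturn\n")
current = teal_digest("#pragma version 10\r\nint 1\r\nreturn\r\n")
assert current == approved  # byte-identical output after normalisation
```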
### Revolution, not evolution
[Section titled “Revolution, not evolution”](#revolution-not-evolution)
This is a new and groundbreaking way of developing for Algorand, and not a continuation of the PyTEAL/Beaker approach. By allowing developers to write procedural code, as opposed to constructing an expression tree, we can (among other things) significantly reduce the barrier to entry for developing smart contracts for the Algorand platform.
Since the programming paradigm will be fundamentally different, providing a smooth migration experience from PyTEAL to this new world is not an intended goal, and shouldn’t be a factor in our decisions. For example, it is not a goal of this project to produce a step-by-step “migrating from PyTEAL” document, as it is not a requirement for users to switch to this new paradigm in the short to medium term - support for PyTEAL should continue in parallel.
# Lora Overview
> Overview of Lora, a live on-chain resource analyzer for Algorand
AlgoKit lora is a live on-chain resource analyzer that enables developers to explore and interact with a configured Algorand network in a visual way.
## What is Lora?
[Section titled “What is Lora?”](#what-is-lora)
AlgoKit lora is a powerful visual tool designed to streamline the Algorand local development experience. It acts as both a network explorer and a tool for building and testing your Algorand applications.
You can access lora in your browser, or by running `algokit explore` when you have the [AlgoKit CLI](https://github.com/algorandfoundation/algokit-cli) installed.
## Key features
[Section titled “Key features”](#key-features)
* Explore blocks, transactions, transaction groups, assets, accounts and applications on LocalNet, TestNet or MainNet.
* Visualise and understand complex transactions and transaction groups with the visual transaction view.
* View blocks in real time as they are produced on the connected network.
* Monitor and inspect real-time transactions related to an asset, account, or application with the live transaction view.
* Review historical transactions related to an asset, account, or application through the historical transaction view.
* Access detailed asset information and metadata when the asset complies with one of the ASA ARCs.
* Connect your Algorand wallet and perform context-specific actions.
* Fund an account in LocalNet or TestNet.
* Visually deploy, populate, simulate and call an app by uploading an ARC-4, ARC-32 or ARC-56 app spec via App lab.
* Craft, simulate and send transaction groups using Transaction wizard.
* Seamless integration into the existing AlgoKit ecosystem.
## Why Did We Build Lora?
[Section titled “Why Did We Build Lora?”](#why-did-we-build-lora)
An explorer is an essential tool for making blockchain data accessible and enables users to inspect and understand on-chain activities. Without these tools, it’s difficult to interpret data or gather the information and insights to fully harness the potential of the blockchain. Therefore it makes sense to have a high quality, officially supported and fully open-source tool available to the community.
Before developing Lora, we evaluated the existing tools in the community, but none fully met our needs.
As part of this evaluation, we came up with several design goals:
* **Developer-Centric User Experience**: Offer a rich user experience tailored for developers, with support for LocalNet, TestNet, and MainNet.
* **Open Source**: Fully open source and actively maintained.
* **Operationally Simple**: Operate using algod and indexer directly, eliminating the need for additional setup, deployment, or maintenance.
* **Visualize Complexity**: Enable Algorand developers to understand complex transactions and transaction groups by visually representing them.
* **Contextual Linking**: Allow users to see live and historical transactions in the context of related accounts, assets, or applications.
* **Performant**: Ensure a fast and seamless experience by minimizing requests to upstream services and utilizing caching to prevent unnecessary data fetching. Whenever possible, ancillary data should be fetched just in time with minimal over-fetching.
* **Support the Learning Journey**: Assist developers in discovering and learning about the Algorand ecosystem.
* **Seamless Integration**: Use and integrate seamlessly with the existing AlgoKit tools and enhance their usefulness.
* **Local Installation**: Allow local installation alongside the AlgoKit CLI and your existing dev tools.
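The “Performant” goal above can be illustrated with a simple memoisation sketch. This is a conceptual Python example only, not Lora’s actual implementation (Lora is a web app); the `fetch_asset` function here stands in for a request to algod or indexer:

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=None)
def fetch_asset(asset_id: int) -> dict:
    # Stand-in for an upstream request to algod/indexer; a real implementation
    # would perform an HTTP call here.
    calls["count"] += 1
    return {"id": asset_id, "name": f"asset-{asset_id}"}

fetch_asset(123)
fetch_asset(123)  # served from cache; no second upstream request
fetch_asset(456)
print(calls["count"])  # 2 upstream fetches for 3 lookups
```

Caching repeated lookups like this is what keeps requests to upstream services to a minimum while ancillary data is fetched just in time.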
# AlgoKit Templates
> Overview of AlgoKit templates
AlgoKit offers a curated collection of production-ready and starter templates, streamlining front-end and smart contract development. These templates provide a comprehensive suite of pre-configured tools and integrations, from boilerplate React projects with Algorand wallet integration to smart contract projects for Python and TypeScript. This enables developers to prototype and deploy robust, production-ready applications rapidly.
By leveraging AlgoKit templates, developers can significantly reduce setup time, ensure best practices in testing, compiling, and deploying smart contracts, and focus on building innovative blockchain solutions with confidence.
This page provides an overview of the official AlgoKit templates and guidance on creating and sharing your custom templates to suit your needs better or contribute to the community.
## Official Templates
[Section titled “Official Templates”](#official-templates)
AlgoKit provides several official templates to cater to different development needs. These templates will create a [standalone AlgoKit project](/algokit/project-structure#standalone-projects).
* Smart Contract Templates:
* [Algorand Python](https://github.com/algorandfoundation/algokit-python-template)
* [Algorand TypeScript](https://github.com/algorand-devrel/tealscript-algokit-template)
* [DApp React Frontend](https://github.com/algorandfoundation/algokit-react-frontend-template)
* [Fullstack (Smart Contract & DApp Frontend template)](https://github.com/algorandfoundation/algokit-fullstack-template)
## How to initialize a template
[Section titled “How to initialize a template”](#how-to-initialize-a-template)
**To initialize using the `algokit` CLI**:
1. [Install AlgoKit](/getting-started/algokit-quick-start) and all the prerequisites mentioned in the installation guide.
2. Execute the command `algokit init`. This initiates an interactive wizard that assists in selecting the most appropriate template for your project requirements.
```shell
algokit init # This command will start an interactive wizard to select a template
```
**To initialize within GitHub Codespaces**:
1. Go to the [algokit-base-template](https://github.com/algorandfoundation/algokit-base-template) repository.
2. Initiate a new codespace by selecting the `Create codespace on main` option. You can find this by clicking the `Code` button and then navigating to the `Codespaces` tab.
3. Upon codespace preparation, `algokit` will automatically start `LocalNet` and present a prompt with the next steps. Executing `algokit init` will initiate the interactive wizard.
## Algorand Python Smart Contract Template
[Section titled “Algorand Python Smart Contract Template”](#algorand-python-smart-contract-template)
[Algorand Python Smart Contract Template Github Repo](https://github.com/algorandfoundation/algokit-python-template)
This template provides a production-ready baseline for developing and deploying [Python](https://github.com/algorandfoundation/puya) smart contracts.
To use it [install AlgoKit](https://github.com/algorandfoundation/algokit-cli#readme) and then either pass in `-t python` to `algokit init` or select the `python` template.
```shell
algokit init -t python
# or
algokit init # and select Smart Contracts & Python template
```
### Features
[Section titled “Features”](#features)
This template supports the following features:
* Compilation of multiple Algorand Python contracts to a predictable folder location and file layout where they can be deployed
* Deploy-time immutability and permanence control
* [Poetry](https://python-poetry.org/) for Python dependency management and virtual environment management
* Linting via [Ruff](https://github.com/charliermarsh/ruff) or [Flake8](https://flake8.pycqa.org/en/latest/)
* Formatting via [Black](https://github.com/psf/black)
* Type checking via [mypy](https://mypy-lang.org/)
* Testing via pytest (not yet used)
* Dependency vulnerability scanning via pip-audit (not yet used)
* VS Code configuration (linting, formatting, breakpoint debugging)
* dotenv (.env) file for configuration
* Automated testing of the compiled smart contracts
* [Output stability](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/articles/output_stability.md) tests of the TEAL output
* CI/CD pipeline using GitHub Actions:
* Optionally pick deployments to Netlify or Vercel
### Getting started
[Section titled “Getting started”](#getting-started)
Once the template is instantiated, you can follow the `README.md` file to see instructions on how to use the template.
* [Instructions for Starter template](https://github.com/algorandfoundation/algokit-python-template/blob/main/examples/starter_python/README.md)
* [Instructions for Production template](https://github.com/algorandfoundation/algokit-python-template/blob/main/examples/production_python/README.md)
## Algorand TypeScript Smart Contract Template
[Section titled “Algorand TypeScript Smart Contract Template”](#algorand-typescript-smart-contract-template)
[Algorand TypeScript Smart Contract Template Github Repo](https://github.com/algorand-devrel/tealscript-algokit-template)
This template provides a baseline TealScript smart contract development environment.
To use it [install AlgoKit](https://github.com/algorandfoundation/algokit-cli#readme) and then either pass in `-t tealscript` to `algokit init` or select the `TypeScript` language option interactively during `algokit init`.
```shell
algokit init -t tealscript
# or
algokit init # and select Smart Contracts & TypeScript template
```
### Getting started
[Section titled “Getting started”](#getting-started-1)
Once the template is instantiated, you can follow the [README.md](https://github.com/algorand-devrel/tealscript-algokit-template/blob/master/template_content/README.md) file for instructions on how to use it.
## DApp Frontend React Template
[Section titled “DApp Frontend React Template”](#dapp-frontend-react-template)
[DApp Frontend React Template Github Repo](https://github.com/algorandfoundation/algokit-react-frontend-template)
This template provides a baseline React web app for developing and integrating with any [ARC32](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0032.md) compliant Algorand smart contracts.
To use it [install AlgoKit](https://github.com/algorandfoundation/algokit-cli#readme) and then either pass in `-t react` to `algokit init` or select the `react` template interactively during `algokit init`.
```shell
algokit init -t react
# or
algokit init # and select DApp Frontend template
```
### Features
[Section titled “Features”](#features-1)
This template supports the following features:
* React web app with [Tailwind CSS](https://tailwindcss.com/) and [TypeScript](https://www.typescriptlang.org/)
* Styled framework agnostic CSS components using [DaisyUI](https://daisyui.com/).
* Starter jest unit tests for TypeScript functions, which can be turned off if not needed.
* Starter [playwright](https://playwright.dev/) tests for end-to-end testing, which can be turned off if not needed.
* Integration with [use-wallet](https://github.com/txnlab/use-wallet) for connecting to Algorand wallets such as Pera, Defly, and Exodus.
* Example of performing a transaction.
* Dotenv support for environment variables and a local-only KMD provider that can connect the frontend component to an `algokit localnet` instance (docker required).
* CI/CD pipeline using GitHub Actions (Vercel or Netlify for hosting)
### Getting started
[Section titled “Getting started”](#getting-started-2)
Once the template is instantiated, you can follow the `README.md` file to see instructions on how to use the template.
* [Instructions for Starter template](https://github.com/algorandfoundation/algokit-react-frontend-template/blob/main/examples/starter_react/README.md)
* [Instructions for Production template](https://github.com/algorandfoundation/algokit-react-frontend-template/blob/main/examples/production_react/README.md)
## Fullstack (Smart Contract + Frontend) Template
[Section titled “Fullstack (Smart Contract + Frontend) Template”](#fullstack-smart-contract--frontend-template)
[Fullstack (Smart Contract + Frontend) Template Github Repo](https://github.com/algorandfoundation/algokit-fullstack-template)
This full-stack template provides both a baseline React web app and a production-ready baseline for developing and deploying `Algorand Python` and `TypeScript` smart contracts. It’s suitable for developing and integrating with any [ARC32](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0032.md) compliant Algorand smart contracts.
To use this template, [install AlgoKit](https://github.com/algorandfoundation/algokit-cli#readme) and then either pass in `-t fullstack` to `algokit init` or select the relevant template interactively during `algokit init`.
```shell
algokit init -t fullstack
# or
algokit init # and select the Smart Contracts & DApp Frontend template
```
### Features
[Section titled “Features”](#features-2)
This template supports many features for developing full-stack applications using official AlgoKit templates. Using the full-stack template currently allows you to create a workspace that combines the following frontend template:
* [algokit-react-frontend-template](https://github.com/algorandfoundation/algokit-react-frontend-template) - A React web app with TypeScript, Tailwind CSS, and all Algorand-specific integrations pre-configured and ready for you to build.
And the following backend templates:
* [algokit-python-template](https://github.com/algorandfoundation/algokit-python-template) - An official starter for developing and deploying Algorand Python smart contracts.
* [algokit-tealscript-template](https://github.com/algorand-devrel/tealscript-algokit-template) - An official starter for developing and deploying TealScript smart contracts.
Initializing a full-stack AlgoKit project will create an AlgoKit workspace with a frontend React web app and an Algorand smart contract project inside the `projects` folder:
* .algokit.toml
* README.md
* {your\_workspace/project\_name}.code-workspace
* projects
* smart-contract
* frontend
# Project Structure
> Learn about the different types of AlgoKit projects and how to create them.
AlgoKit streamlines configuring components for development, testing, and deploying smart contracts to the blockchain and effortlessly sets up a project with all the necessary components. In this guide, we’ll explore what an AlgoKit project is and how you can use it to kickstart your own Algorand project.
## What is an AlgoKit Project?
[Section titled “What is an AlgoKit Project?”](#what-is-an-algokit-project)
In the context of AlgoKit, a “project” refers to a structured standalone or monorepo workspace that includes all the necessary components for developing, testing, and deploying Algorand applications, such as smart contracts, frontend applications, and any associated configurations.
## Two Types of AlgoKit Projects
[Section titled “Two Types of AlgoKit Projects”](#two-types-of-algokit-projects)
AlgoKit supports two main types of project structures: Workspaces and Standalone Projects. This flexibility caters to the diverse needs of developers, whether managing multiple related projects or focusing on a single application.
* **Monorepo Workspace**: This workspace is ideal for complex applications comprising multiple subprojects. It facilitates the organized management of these subprojects under a single root directory, streamlining dependency management and shared configurations.
* **Standalone Project**: This structure is suitable for simpler applications or when working on a single component. It offers straightforward project management, with each project residing in its own directory, independent of others.
## AlgoKit Monorepo Workspace
[Section titled “AlgoKit Monorepo Workspace”](#algokit-monorepo-workspace)
Workspaces are designed to manage multiple related projects under a single root directory. This approach benefits complex applications with multiple sub-projects, such as a smart contract and a corresponding frontend application. Workspaces help organize these sub-projects in a structured manner, making managing dependencies and shared configurations easier.
Simply put, workspaces contain multiple AlgoKit standalone project folders within the `projects` folder and manage them from a single root directory:
* .algokit.toml
* README.md
* {your\_workspace/project\_name}.code-workspace
* projects
  * standalone-project-1
  * standalone-project-2
### Creating an AlgoKit Monorepo Workspace
[Section titled “Creating an AlgoKit Monorepo Workspace”](#creating-an-algokit-monorepo-workspace)
To create an AlgoKit monorepo workspace, run the following command:
```shell
algokit init # Creates a workspace by default
# or
algokit init --workspace
```
Note
The `--workspace` flag is enabled by default, so running `algokit init` will create an AlgoKit workspace.
### Adding a Sub-Project to an AlgoKit Workspace
[Section titled “Adding a Sub-Project to an AlgoKit Workspace”](#adding-a-sub-project-to-an-algokit-workspace)
Once established, new projects can be added to the workspace, allowing centralized management.
To add another sub-project within a workspace, run the following command at the root directory of the related AlgoKit workspace:
```shell
algokit init
```
Note
Please note that instantiating a workspace inside a workspace (aka ‘workspace nesting’) is not supported or recommended. To add a new project to an existing workspace, run `algokit init` from the root of that workspace.
### Marking a Project as a Workspace
[Section titled “Marking a Project as a Workspace”](#marking-a-project-as-a-workspace)
To mark your project as a workspace, fill in the following in your `.algokit.toml` file:
```toml
[project]
type = 'workspace' # type specifying if the project is a workspace or standalone
projects_root_path = 'projects' # path to the root folder containing all sub-projects in the workspace
```
### VSCode optimizations
[Section titled “VSCode optimizations”](#vscode-optimizations)
AlgoKit has a set of minor optimizations for VSCode users that are useful to be aware of:
* Templates created with the `--workspace` flag automatically include a VSCode code-workspace file. New projects added to an AlgoKit workspace are also integrated into an existing VSCode workspace.
* Using the `--ide` flag with init triggers automatic prompts to open the project and, if available, the code workspace in VSCode.
### Handling of the .github Folder
[Section titled “Handling of the .github Folder”](#handling-of-the-github-folder)
A key aspect of using the `--workspace` flag is how the `.github` folder is managed. This folder, which contains GitHub-specific configurations such as workflows and issue templates, is moved from the project directory to the root of the workspace. This move is necessary because GitHub does not recognize workflows located in subdirectories.
Here’s a simplified overview of what happens:
1. If a .github folder is found in your project, its contents are transferred to the workspace’s root .github folder.
2. Files with matching names in the destination are not overwritten; they’re skipped.
3. The original .github folder is removed if left empty after the move.
4. A notification is displayed advising you to review the moved .github contents to ensure everything is in order.
This process ensures that your GitHub configurations are appropriately recognized at the workspace level, allowing you to utilize GitHub Actions and other features seamlessly across your projects.
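The move-and-skip behaviour described above can be sketched in a few lines of Python. This is an illustrative approximation only, not AlgoKit's actual implementation; `merge_github_folder` is a hypothetical name:

```python
# Illustrative sketch of the .github merge behaviour (NOT AlgoKit's real code).
import shutil
from pathlib import Path

def merge_github_folder(project_dir: Path, workspace_root: Path) -> None:
    src = project_dir / ".github"
    if not src.is_dir():
        return
    dest = workspace_root / ".github"
    for item in sorted(src.rglob("*")):
        if item.is_dir():
            continue
        target = dest / item.relative_to(src)
        if target.exists():
            continue  # files with matching names are skipped, never overwritten
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(item), str(target))
    # the original folder is removed only if no files are left behind
    if not any(p.is_file() for p in src.rglob("*")):
        shutil.rmtree(src)
```

Conflicting files stay behind in the project's `.github` folder, which is why AlgoKit prompts you to review the result manually.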
## Standalone Projects
[Section titled “Standalone Projects”](#standalone-projects)
Standalone projects are suitable for simpler applications or when working on a single component. Each project resides in its own directory, independent of others. This structure is ideal for developers who prefer simplicity, are focused on a single aspect of their application, and are sure they will not need to add more sub-projects in the future.
### Creating a Standalone Project
[Section titled “Creating a Standalone Project”](#creating-a-standalone-project)
To create a standalone project, use the `--no-workspace` flag during initialization.
```shell
algokit init --no-workspace
```
This instructs AlgoKit to bypass the workspace structure and set up the project as an isolated entity.
### Marking a Project as a Standalone Project
[Section titled “Marking a Project as a Standalone Project”](#marking-a-project-as-a-standalone-project)
To mark your project as a standalone project, fill in the following in your `.algokit.toml` file:
```toml
[project]
type = {'backend' | 'contract' | 'frontend'} # currently support 3 generic categories for standalone projects
name = 'my-project' # unique name for the project inside the workspace
```
Note
We recommend using workspaces for most projects (hence they are enabled by default), as they provide a more organized and scalable approach to managing multiple sub-projects. However, standalone projects are a great choice for simple applications, or when you are certain you will not need to add more sub-projects in the future. For such cases, append `--no-workspace` when using the `algokit init` command. For more details on the init command, please refer to the [init](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/init.md) command docs.
Both workspaces and standalone projects are fully supported by AlgoKit’s suite of tools, ensuring developers can choose the structure that best fits their workflow without compromising on functionality.
# Algorand transaction subscription / indexing
## Quick start
[Section titled “Quick start”](#quick-start)
```{testcode}
# Import necessary modules
from algokit_subscriber import AlgorandSubscriber
from algokit_utils import get_algod_client, get_algonode_config

# Create an Algod client (TestNet used for demo purposes)
algod_client = get_algod_client(get_algonode_config("testnet", "algod", ""))

# Create the subscriber (example with filters)
subscriber = AlgorandSubscriber(
    config={
        "filters": [
            {
                "name": "filter1",
                "filter": {
                    "type": "pay",
                    "sender": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ",
                },
            },
        ],
        "watermark_persistence": {
            "get": lambda: 0,
            "set": lambda x: None,
        },
        "sync_behaviour": "skip-sync-newest",
        "max_rounds_to_sync": 100,
    },
    algod_client=algod_client,
)

# Set up subscription(s)
subscriber.on("filter1", lambda transaction, _: print(f"Received transaction: {transaction['id']}"))

# Set up error handling
subscriber.on_error(lambda error, _: print(f"Error occurred: {error}"))

# Either: Start the subscriber (if in long-running process)
# subscriber.start()

# OR: Poll the subscriber (if in cron job / periodic lambda)
result = subscriber.poll_once()
print(f"Polled {len(result['subscribed_transactions'])} transactions")
```
```{testoutput}
Polled 0 transactions
```
## Capabilities
[Section titled “Capabilities”](#capabilities)
### Notification *and* indexing
[Section titled “Notification and indexing”](#notification-and-indexing)
This library supports staying at the tip of the chain to power notification / alerting type scenarios through the `sync_behaviour` parameter in both [`AlgorandSubscriber`](./subscriber) and [`get_subscribed_transactions`](./subscriptions). For example, to stay at the tip of the chain for notification/alerting scenarios you could do:
```python
subscriber = AlgorandSubscriber({"sync_behaviour": "skip-sync-newest", "max_rounds_to_sync": 100, ...}, ...)
# or:
get_subscribed_transactions({"sync_behaviour": "skip-sync-newest", "max_rounds_to_sync": 100, ...}, ...)
```
The `current_round` parameter (available when calling `get_subscribed_transactions`) can be used to set the tip of the chain. If not specified, the tip will be automatically detected. Whilst this is generally not needed, it is useful in scenarios where the tip is being detected as part of another process and you only want to sync to that point and no further.
The `max_rounds_to_sync` parameter controls how many rounds it will process on first start when it’s not caught up to the tip of the chain. Once caught up, it will keep processing as many rounds as are available between the last round it processed and the next sync attempt (see below for how to control that).
If you expect your service to stay running and never fall more than `max_rounds_to_sync` rounds behind the tip of the chain, and you’d prefer it to throw an error when it loses track of the tip rather than continue processing old records or skip to the newest, you can set the `sync_behaviour` parameter to `fail`.
The `sync_behaviour` parameter can also be set to `sync-oldest-start-now` if you want to process all transactions once you start alerting/notifying. This requires that your service needs to keep running otherwise it could fall behind and start processing old records / take a while to catch back up with the tip of the chain. This is also a useful setting if you are creating an indexer that only needs to process from the moment the indexer is deployed rather than from the beginning of the chain. Note: this requires the [initial watermark](#watermarking-and-resilience) to start at 0 to work.
The `sync_behaviour` parameter can also be set to `sync-oldest`, which is a more traditional indexing scenario where you want to process every single block from the beginning of the chain. This can take a long time to process by default (e.g. days), noting there is a [fast catchup feature](#fast-initial-index). If you don’t want to start from the beginning of the chain you can [set the initial watermark](#watermarking-and-resilience) to a higher round number than 0 to start indexing from that point.
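For example, to begin indexing from a specific round rather than from genesis, you could seed the persisted watermark before the first run. This is a hedged sketch only; the round number and the in-memory store are illustrative:

```python
# Seed the watermark so `sync-oldest` starts at round 1,000,000 (illustrative value)
watermark = {"value": 1_000_000}

subscriber = AlgorandSubscriber(
    config={
        "sync_behaviour": "sync-oldest",
        "watermark_persistence": {
            "get": lambda: watermark["value"],
            "set": lambda w: watermark.update(value=w),
        },
        # ... filters and other configuration options
    },
    algod_client=algod_client,
)
```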
### Low latency processing
[Section titled “Low latency processing”](#low-latency-processing)
You can control the polling semantics of the library when using the [`AlgorandSubscriber`](./subscriber) by either specifying the `frequency_in_seconds` parameter to control the duration between polls or you can use the `wait_for_block_when_at_tip` parameter to indicate the subscriber should [call algod to ask it to inform the subscriber when a new round is available](https://dev.algorand.co/reference/rest-apis/algod/#waitforblock) so the subscriber can immediately process that round with a much lower-latency. When this mode is set, the subscriber intelligently uses this option only when it’s caught up to the tip of the chain, but otherwise uses `frequency_in_seconds` while catching up to the tip of the chain.
e.g.
```python
# When catching up to the tip of the chain, poll every 1s for the next 1000 blocks;
# when caught up, ask algod for each new block so it can be processed immediately with low latency
subscriber = AlgorandSubscriber(config={
    "frequency_in_seconds": 1,
    "wait_for_block_when_at_tip": True,
    "max_rounds_to_sync": 1000,
    # ... other configuration options
}, ...)
...
subscriber.start()
```
If you are using [`get_subscribed_transactions`](./subscriptions) or the `poll_once` method on `AlgorandSubscriber` then you can use your infrastructure and/or surrounding orchestration code to take control of the polling duration.
If you want to manually run code that waits for a given round to become available you can execute the following algosdk code:
```python
algod.status_after_block(round_number_to_wait_for)
```
### Watermarking and resilience
[Section titled “Watermarking and resilience”](#watermarking-and-resilience)
You can create reliable syncing / indexing services through a simple round watermarking capability that allows those services to recover from an outage.
This works through the use of the `watermark_persistence` parameter in [`AlgorandSubscriber`](./subscriber) and `watermark` parameter in [`get_subscribed_transactions`](./subscriptions):
```python
def get_saved_watermark() -> int:
    # Return the watermark from a persistence store e.g. database, redis, file system, etc.
    pass

def save_watermark(new_watermark: int) -> None:
    # Save the watermark to a persistence store e.g. database, redis, file system, etc.
    pass

...

subscriber = AlgorandSubscriber({
    "watermark_persistence": {
        "get": get_saved_watermark,
        "set": save_watermark
    },
    # ... other configuration options
}, ...)
# or:
watermark = get_saved_watermark()
result = get_subscribed_transactions(watermark=watermark, ...)
save_watermark(result.new_watermark)
```
By using a persistence store, you can gracefully respond to an outage of your subscriber: the next time it starts, it will pick back up from the point where it last persisted. It’s worth noting this provides at-least-once delivery semantics, so you need to handle duplicate events.
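A minimal way to make processing idempotent under at-least-once delivery is to track which transaction ids have already been handled. This is a sketch only; a production service would persist this set alongside its other state:

```python
# Sketch of an idempotent handler: duplicate deliveries become no-ops.
processed_ids: set[str] = set()

def handle_transaction(txn: dict) -> bool:
    """Process a subscribed transaction at most once; returns True if it was processed."""
    if txn["id"] in processed_ids:
        return False  # duplicate delivery from a replayed poll
    processed_ids.add(txn["id"])
    # ... your real processing goes here
    return True
```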
Alternatively, if you want to create at most once delivery semantics you could use the [transactional outbox pattern](https://microservices.io/patterns/data/transactional-outbox.html) and wrap a unit of work from a ACID persistence store (e.g. a SQL database with a serializable or repeatable read transaction) around the watermark retrieval, transaction processing and watermark persistence so the processing of transactions and watermarking of a single poll happens in a single atomic transaction. In this model, you would then process the transactions in a separate process from the persistence store (and likely have a flag on each transaction to indicate if it has been processed or not). You would need to be careful to ensure that you only have one subscriber actively running at a time to guarantee this delivery semantic. To ensure resilience you may want to have multiple subscribers running, but a primary node that actually executes based on retrieval of a distributed semaphore / lease.
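A minimal sketch of that outbox idea using SQLite (table names and schema are illustrative): the outbox rows and the watermark update commit in a single atomic transaction, and a separate process would later work through unprocessed rows:

```python
# Illustrative transactional-outbox sketch with SQLite; schema is hypothetical.
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    with conn:
        conn.execute("CREATE TABLE IF NOT EXISTS watermark (id INTEGER PRIMARY KEY CHECK (id = 1), round INTEGER)")
        conn.execute("INSERT OR IGNORE INTO watermark (id, round) VALUES (1, 0)")
        conn.execute("CREATE TABLE IF NOT EXISTS outbox (txn_id TEXT PRIMARY KEY, payload TEXT, processed INTEGER DEFAULT 0)")

def record_poll(conn: sqlite3.Connection, transactions: list, new_watermark: int) -> None:
    # One atomic transaction: enqueue outbox rows AND advance the watermark together
    with conn:
        conn.executemany(
            "INSERT OR IGNORE INTO outbox (txn_id, payload) VALUES (?, ?)",
            [(t["id"], str(t)) for t in transactions],
        )
        conn.execute("UPDATE watermark SET round = ? WHERE id = 1", (new_watermark,))

def get_watermark(conn: sqlite3.Connection) -> int:
    return conn.execute("SELECT round FROM watermark WHERE id = 1").fetchone()[0]
```

A worker would then poll `outbox` for rows with `processed = 0`, handle them, and mark them processed, giving at-most-once processing as long as only one subscriber runs at a time.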
If you are doing a quick test or creating an ephemeral subscriber that just needs to exist in-memory and doesn’t need to recover resiliently (useful with `sync_behaviour` of `skip-sync-newest` for instance) then you can use an in-memory variable instead of a persistence store, e.g.:
```python
watermark = 0

subscriber = AlgorandSubscriber(
    config={
        "watermark_persistence": {
            "get": lambda: watermark,
            "set": lambda new_watermark: globals().update(watermark=new_watermark)
        },
        # ... other configuration options
    },
    # ... other arguments
)
# or:
watermark = 0
result = get_subscribed_transactions(watermark=watermark, ...)
watermark = result.new_watermark
```
### Extensive subscription filtering
[Section titled “Extensive subscription filtering”](#extensive-subscription-filtering)
This library has extensive filtering options available to you so you can have fine-grained control over which transactions you are interested in.
There is a core type that is used to specify the filters [`TransactionFilter`](subscriptions#transactionfilter):
```python
filter_properties = {...}  # any combination of the filter options listed below
subscriber = AlgorandSubscriber(config={'filters': [{'name': 'filterName', 'filter': filter_properties}], ...}, ...)
# or:
get_subscribed_transactions(filters=[{'name': 'filterName', 'filter': filter_properties}], ...)
```
Currently this allows you to filter based on any combination (AND logic) of:
* Transaction type e.g. `'filter': { 'type': 'axfer' }` or `'filter': { 'type': ['axfer', 'pay'] }`
* Account (sender and receiver) e.g. `'filter': { 'sender': 'ABCDE..F' }` or `'filter': { 'sender': ['ABCDE..F', 'ZYXWV..A'] }` and `'filter': { 'receiver': '12345..6' }` or `'filter': { 'receiver': ['ABCDE..F', 'ZYXWV..A'] }`
* Note prefix e.g. `'filter': { 'note_prefix': 'xyz' }`
* Apps
  * ID e.g. `'filter': { 'app_id': 54321 }` or `'filter': { 'app_id': [54321, 12345] }`
  * Creation e.g. `'filter': { 'app_create': True }`
  * Call on-complete(s) e.g. `'filter': { 'app_on_complete': 'optin' }` or `'filter': { 'app_on_complete': ['optin', 'noop'] }`
  * ARC-4 method signature(s) e.g. `'filter': { 'method_signature': 'MyMethod(uint64,string)' }` or `'filter': { 'method_signature': ['MyMethod(uint64,string)uint64', 'MyMethod2(uint64)'] }`
  * Call arguments e.g.
    ```python
    'filter': {
        'app_call_arguments_match': lambda app_call_arguments:
            len(app_call_arguments) > 1 and
            app_call_arguments[1].decode('utf-8') == 'hello_world'
    }
    ```
  * Emitted ARC-28 event(s) e.g.
    ```python
    'filter': {
        'arc28_events': [{'group_name': 'group1', 'event_name': 'MyEvent'}]
    }
    ```
    Note: For this to work you need to [specify ARC-28 events in the subscription config](#arc-28-event-subscription-and-reads).
* Assets
  * ID e.g. `'filter': { 'asset_id': 123456 }` or `'filter': { 'asset_id': [123456, 456789] }`
  * Creation e.g. `'filter': { 'asset_create': True }`
  * Amount transferred (min and/or max) e.g. `'filter': { 'type': 'axfer', 'min_amount': 1, 'max_amount': 100 }`
  * Balance changes (asset ID, sender, receiver, close to, min and/or max change) e.g. `'filter': { 'balance_changes': [{'asset_id': [15345, 36234], 'roles': [BalanceChangeRole.Sender], 'address': 'ABC...', 'min_amount': 1, 'max_amount': 2}]}`
* Algo transfers (pay transactions)
  * Amount transferred (min and/or max) e.g. `'filter': { 'type': 'pay', 'min_amount': 1, 'max_amount': 100 }`
  * Balance changes (sender, receiver, close to, min and/or max change) e.g. `'filter': { 'balance_changes': [{'roles': [BalanceChangeRole.Sender], 'address': 'ABC...', 'min_amount': 1, 'max_amount': 2}]}`
You can supply multiple, named filters via the [`NamedTransactionFilter`](subscriptions#namedtransactionfilter) type. When subscribed transactions are returned each transaction will have a `filters_matched` property that will have an array of any filter(s) that caused that transaction to be returned. When using [`AlgorandSubscriber`](./subscriber), you can subscribe to events that are emitted with the filter name.
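To illustrate how `filters_matched` drives event routing, here is a small sketch of the dispatch logic; `dispatch` is a hypothetical helper, and `AlgorandSubscriber.on(...)` does the equivalent for you:

```python
# Sketch: route each subscribed transaction to the handler(s) for its matched filters.
def dispatch(transactions: list, handlers: dict) -> None:
    for txn in transactions:
        for name in txn.get("filters_matched") or []:
            handler = handlers.get(name)
            if handler:
                handler(txn)
```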
### ARC-28 event subscription and reads
[Section titled “ARC-28 event subscription and reads”](#arc-28-event-subscription-and-reads)
You can [subscribe to ARC-28 events](#extensive-subscription-filtering) for a smart contract, similar to how you can [subscribe to events in Ethereum](https://docs.web3js.org/guides/events_subscriptions/).
Furthermore, you can receive any ARC-28 events that a smart contract call you subscribe to emitted in the [subscribed transaction object](subscriptions#subscribedtransaction).
Both subscription and receiving ARC-28 events work through the use of the `arc28_events` parameter in [`AlgorandSubscriber`](./subscriber) and [`get_subscribed_transactions`](./subscriptions):
```python
group1_events = {
    "group_name": "group1",
    "events": [
        {
            "name": "MyEvent",
            "args": [
                {"type": "uint64"},
                {"type": "string"},
            ]
        }
    ]
}

subscriber = AlgorandSubscriber(config={"arc28_events": [group1_events], ...}, ...)
# or:
result = get_subscribed_transactions(arc28_events=[group1_events], ...)
```
The `Arc28EventGroup` type has the following definition:
```python
class Arc28EventGroup(TypedDict):
    """
    Specifies a group of ARC-28 event definitions along with instructions for when to attempt to process the events.
    """

    group_name: str
    """The name to designate for this group of events."""

    process_for_app_ids: list[int]
    """Optional list of app IDs that this event should apply to."""

    process_transaction: NotRequired[Callable[[TransactionResult], bool]]
    """Optional predicate to indicate if these ARC-28 events should be processed for the given transaction."""

    continue_on_error: bool
    """Whether or not to silently (with warning log) continue if an error is encountered processing the ARC-28 event data; default = False."""

    events: list[Arc28Event]
    """The list of ARC-28 event definitions."""


class Arc28Event(TypedDict):
    """
    The definition of metadata for an ARC-28 event as per the ARC-28 specification.
    """

    name: str
    """The name of the event"""

    desc: NotRequired[str]
    """An optional, user-friendly description for the event"""

    args: list[Arc28EventArg]
    """The arguments of the event, in order"""
```
Each group allows you to apply logic to the applicability and processing of a set of events. This structure allows you to safely process the events from multiple contracts in the same subscriber, or perform more advanced filtering logic to event processing.
When specifying an [ARC-28 event filter](#extensive-subscription-filtering), you specify both the `group_name` and `event_name`(s) to narrow down what event(s) you want to subscribe to.
If you want to emit an ARC-28 event from your smart contract you can follow the [below code examples](#emit-arc-28-events).
### First-class inner transaction support
[Section titled “First-class inner transaction support”](#first-class-inner-transaction-support)
When you subscribe to transactions, any subscription that covers an inner transaction will pick up that inner transaction and [return](subscriptions#subscribedtransaction) it to you correctly.
Note: the behaviour of Algorand Indexer is to return the parent transaction, not the inner transaction; this library will always return the actual transaction you subscribed to.
If you [receive](subscriptions#subscribedtransaction) an inner transaction then there will be a `parent_transaction_id` field populated that allows you to see that it was an inner transaction and how to identify the parent transaction.
The `id` of an inner transaction will be set to `{parent_transaction_id}/inner/{index-of-child-within-parent}` where `{index-of-child-within-parent}` is calculated based on uniquely walking the tree of potentially nested inner transactions. [This transaction in Allo.info](https://allo.info/tx/group/cHiEEvBCRGnUhz9409gHl%2Fvn00lYDZnJoppC3YexRr0%3D) is a good illustration of how inner transaction indexes are allocated (this library uses the same approach).
All [returned](subscriptions#subscribedtransaction) transactions will have an `inner-txns` property with any inner transactions of that transaction populated (recursively).
The `intra-round-offset` field in a [subscribed transaction or inner transaction within](subscriptions#subscribedtransaction) is calculated by walking the full tree depth-first from the first transaction in the block, through any inner transactions recursively, starting from an index of 0. This algorithm matches the one in Algorand Indexer and ensures that all transactions have a unique index, but the top-level transactions in the block don’t necessarily have sequential indexes.
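The depth-first numbering can be sketched as follows. This is illustrative only; the ids and the `inner-txns` key mirror the indexer-style model described above:

```python
# Sketch of depth-first intra-round offset assignment (not the library's actual code).
def assign_intra_round_offsets(block_txns: list) -> dict:
    """Walk a block's transactions depth-first, assigning each a unique offset from 0."""
    offsets: dict = {}
    counter = 0

    def walk(txns: list) -> None:
        nonlocal counter
        for txn in txns:
            offsets[txn["id"]] = counter
            counter += 1
            walk(txn.get("inner-txns", []))  # recurse into nested inner transactions

    walk(block_txns)
    return offsets
```

Note how, with a first transaction that has three (nested) inner transactions, the second top-level transaction in the block receives offset 4, not 1, matching the non-sequential top-level indexes described above.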
### State-proof support
[Section titled “State-proof support”](#state-proof-support)
You can subscribe to [state proof](https://dev.algorand.co/concepts/protocol/stateproofs) transactions using this subscriber library. At the time of writing state proof transactions are not supported by algosdk v2 and custom handling has been added to ensure this valuable type of transaction can be subscribed to.
The field level documentation of the [returned state proof transaction](subscriptions#subscribedtransaction) is comprehensively documented via [AlgoKit Utils](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/src/types/indexer.ts#L277).
By exposing this functionality, this library can be used to create a [light client](https://dev.algorand.co/concepts/protocol/stateproofs).
### Simple programming model
[Section titled “Simple programming model”](#simple-programming-model)
This library is easy to use and consume through [type-safe Python methods and objects](#entry-points), and subscribed transactions have a [comprehensive and familiar model type](subscriptions#subscribedtransaction) with all relevant/useful information about each transaction (including things like transaction ID, round number, created asset/app ID, app logs, etc.), modelled on the indexer data model (which is used regardless of whether the transactions come from indexer or algod, so the experience is consistent).
For more examples of how to use it see the [relevant documentation](subscriber).
### Easy to deploy
[Section titled “Easy to deploy”](#easy-to-deploy)
Because the [entry points](#entry-points) of this library are simple Python methods, you simply need to run them in a valid Python execution environment. For instance, you could run the subscriber in a long-running process or container, or invoke it from a periodic serverless function, depending on how you want to deploy it.
Because of that, you have full control over how you want to deploy and use the subscriber; it will work with whatever persistence (e.g. sql, no-sql, etc.), queuing/messaging (e.g. queues, topics, buses, web hooks, web sockets) and compute (e.g. serverless periodic lambdas, continually running containers, virtual machines, etc.) services you want to use.
### Fast initial index
[Section titled “Fast initial index”](#fast-initial-index)
When [subscribing to the chain](#notification-and-indexing) for the purposes of building an index you often will want to start at the beginning of the chain or a substantial time in the past when the given solution you are subscribing for started.
This kind of catch up takes days to process since algod only lets you retrieve a single block at a time and retrieving a block takes 0.5-1s. Given there are millions of blocks in MainNet it doesn’t take long to do the math to see why it takes so long to catch up.
This subscriber library has a unique, optional indexer catch up mode that allows you to use indexer to catch up to the tip of the chain in seconds or minutes rather than days for your specific filter.
This is really handy when you are doing local development or spinning up a new environment and don’t want to wait for days.
To make use of this feature, you need to set the `sync_behaviour` config to `catchup-with-indexer` and ensure that you pass `indexer` in to the [entry point](#entry-points) along with `algod`.
Any [filter](#extensive-subscription-filtering) you apply will be seamlessly translated to indexer searches to get the historic transactions in the most efficient way possible based on the apis indexer exposes. Once the subscriber is within `max_rounds_to_sync` of the tip of the chain it will switch to subscribing using `algod`.
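For example (a hedged sketch; the round limit is illustrative, and an indexer client is required for this mode per the `AlgorandSubscriber` constructor):

```python
subscriber = AlgorandSubscriber(
    config={
        "sync_behaviour": "catchup-with-indexer",
        "max_rounds_to_sync": 100,
        # ... filters, watermark_persistence, etc.
    },
    algod_client=algod_client,
    indexer_client=indexer_client,  # required for catchup-with-indexer
)
```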
To see this in action, you can run the Data History Museum example in this repository against MainNet and see it sync millions of rounds in seconds.
The indexer catchup isn’t magic - if the filter you are trying to catch up with generates an enormous number of transactions (e.g. hundreds of thousands or millions) then it will run very slowly and has the potential to run out of compute or memory, depending on the constraints of the deployment environment you are running in. In that instance, there is a config parameter you can use, `max_indexer_rounds_to_sync`, to break the indexer catchup into multiple “polls”, e.g. 100,000 rounds at a time. This allows a smaller batch of transactions to be retrieved and persisted in multiple batches.
To understand how the indexer behaviour works to know if you are likely to generate a lot of transactions it’s worth understanding the architecture of the indexer catchup; indexer catchup runs in two stages:
1. **Pre-filtering**: Any filters that can be translated to the [indexer search transactions endpoint](https://dev.algorand.co/reference/rest-apis/indexer/#lookuptransaction). This query is then run between the rounds that need to be synced and paginated in the max number of results (1000) at a time until all of the transactions are retrieved. This ensures we get round-based transactional consistency. This is the filter that can easily explode out though and take a long time when using indexer catchup. For avoidance of doubt, the following filters are the ones that are converted to a pre-filter:
* `sender` (single value)
* `receiver` (single value)
* `type` (single value)
* `note_prefix`
* `app_id` (single value)
* `asset_id` (single value)
* `min_amount` (and `type = pay` or `asset_id` provided)
* `max_amount` (and `max_amount` is less than the maximum safe integer value, and `type = pay` or (`asset_id` provided and `min_amount > 0`))
2. **Post-filtering**: All remaining filters are then applied in-memory to the resulting list of transactions that are returned from the pre-filter before being returned as subscribed transactions.
## Entry points
[Section titled “Entry points”](#entry-points)
There are two entry points into the subscriber functionality. The lower level [`get_subscribed_transactions`](./subscriptions) method that contains the raw subscription logic for a single “poll”, and the [`AlgorandSubscriber`](./subscriber) class that provides a higher level interface that is easier to use and takes care of a lot more orchestration logic for you (particularly around the ability to continuously poll).
Both are first-class supported ways of using this library, but we generally recommend starting with the `AlgorandSubscriber` since it’s easier to use and will cover the majority of use cases.
## Reference docs
[Section titled “Reference docs”](#reference-docs)
[See reference docs](./code/README).
## Emit ARC-28 events
[Section titled “Emit ARC-28 events”](#emit-arc-28-events)
To emit ARC-28 events from your smart contract you can use the following syntax.
### Algorand Python
[Section titled “Algorand Python”](#algorand-python)
```python
@arc4.abimethod
def emit_swapped(self, a: arc4.UInt64, b: arc4.UInt64) -> None:
    arc4.emit("MyEvent", a, b)
```
OR:
```python
class MyEvent(arc4.Struct):
    a: arc4.String
    b: arc4.UInt64

# ...

@arc4.abimethod
def emit_swapped(self, a: arc4.String, b: arc4.UInt64) -> None:
    arc4.emit(MyEvent(a, b))
```
### TealScript
[Section titled “TealScript”](#tealscript)
```typescript
MyEvent = new EventLogger<{
  stringField: string
  intField: uint64
}>();
// ...
this.MyEvent.log({
  stringField: "a",
  intField: 2
})
```
### PyTEAL
[Section titled “PyTEAL”](#pyteal)
```python
class MyEvent(pt.abi.NamedTuple):
    stringField: pt.abi.Field[pt.abi.String]
    intField: pt.abi.Field[pt.abi.Uint64]

# ...

@app.external()
def myMethod(a: pt.abi.String, b: pt.abi.Uint64) -> pt.Expr:
    # ...
    return pt.Seq(
        # ...
        (event := MyEvent()).set(a, b),
        pt.Log(pt.Concat(pt.MethodSignature("MyEvent(byte[],uint64)"), event._stored_value.load())),
        pt.Approve(),
    )
```
Note: if your event doesn’t have any dynamic ARC-4 types in it then you can simplify that to something like this:
```python
pt.Log(pt.Concat(pt.MethodSignature("MyEvent(byte[],uint64)"), a.get(), pt.Itob(b.get()))),
```
### TEAL
[Section titled “TEAL”](#teal)
```teal
method "MyEvent(byte[],uint64)"
frame_dig 0 // or any other command to put the ARC-4 encoded bytes for the event on the stack
concat
log
```
## Next steps
[Section titled “Next steps”](#next-steps)
To dig deeper into the capabilities of `algokit-subscriber`, continue with the following sections.
```{toctree}
---
maxdepth: 2
caption: Contents
hidden: true
---
subscriber
subscriptions
api
```
# AlgorandSubscriber
`AlgorandSubscriber` is a class that allows you to easily subscribe to the Algorand Blockchain, define a series of events that you are interested in, and react to those events.
## Creating a subscriber
[Section titled “Creating a subscriber”](#creating-a-subscriber)
To create an `AlgorandSubscriber` you can use the constructor:
```python
class AlgorandSubscriber:
    def __init__(self, config: AlgorandSubscriberConfig, algod_client: AlgodClient, indexer_client: IndexerClient | None = None):
        """
        Create a new `AlgorandSubscriber`.

        :param config: The subscriber configuration
        :param algod_client: An algod client
        :param indexer_client: An (optional) indexer client; only needed if `subscription.sync_behaviour` is `catchup-with-indexer`
        """
```
`watermark_persistence` allows you to ensure reliability against your code having outages since you can persist the last block your code processed up to and then provide it again the next time your code runs.
`max_rounds_to_sync` and `sync_behaviour` allow you to control the subscription semantics as your code falls behind the tip of the chain (either on first run or after an outage).
`frequency_in_seconds` allows you to control the polling frequency and by association your latency tolerance for new events once you’ve caught up to the tip of the chain. Alternatively, you can set `wait_for_block_when_at_tip` to get the subscriber to ask algod to tell it when there is a new block ready to reduce latency when it’s caught up to the tip of the chain.
`arc28_events` are any [ARC-28 event definitions](subscriptions#arc-28-events).
The `filters` parameter defines the different subscription(s) you want to make, and conforms to the following interface:
```python
class NamedTransactionFilter(TypedDict):
    """Specify a named filter to apply to find transactions of interest."""

    name: str
    """The name to give the filter."""

    filter: TransactionFilter
    """The filter itself."""

class SubscriberConfigFilter(NamedTransactionFilter):
    """A single event to subscribe to / emit."""

    mapper: NotRequired[Callable[[list['SubscribedTransaction']], list[Any]]]
    """
    An optional data mapper if you want the event data to take a certain shape when subscribing to events with this filter name.
    """
```
The event name is a unique name that describes the event you are subscribing to. The [filter](subscriptions#transactionfilter) defines which on-chain transactions are "collected" by that event, and the optional mapper transforms the raw transaction into a more targeted type for your event subscribers to consume.
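To illustrate the mapper concept, here is a hedged sketch in plain Python with no library dependency: transactions are represented as dicts using the indexer field names, and `payment_mapper` is a hypothetical function name, not part of the library.

```python
# Illustrative mapper sketch: reshape raw subscribed transactions
# (plain dicts using indexer field names) into a minimal payment record.
def payment_mapper(transactions: list[dict]) -> list[dict]:
    """Map raw transactions to a simpler shape for downstream consumers."""
    return [
        {
            "id": txn["id"],
            "sender": txn["sender"],
            "amount": txn.get("payment-transaction", {}).get("amount", 0),
        }
        for txn in transactions
    ]

raw = [{"id": "TXN1", "sender": "ABC...", "payment-transaction": {"amount": 1000}}]
mapped = payment_mapper(raw)
```

A mapper like this would be supplied via the `mapper` key of a `SubscriberConfigFilter`, so listeners registered against that filter name receive the mapped shape instead of the raw transaction.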
## Subscribing to events
[Section titled “Subscribing to events”](#subscribing-to-events)
Once you have created the `AlgorandSubscriber`, you can register handlers/listeners for the filters you have defined, either per transaction or for each poll as a whole batch.
You can do this via the `on`, `on_batch` and `on_poll` methods:
```python
def on(self, filter_name: str, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run on every subscribed transaction matching the given filter name.
    """

def on_batch(self, filter_name: str, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run on all subscribed transactions matching the given filter name
    for each subscription poll.
    """

def on_before_poll(self, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run before each subscription poll.
    """

def on_poll(self, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run after each subscription poll.
    """

def on_error(self, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run when an error occurs.
    """
```
The `EventListener` type is defined as:
```python
EventListener = Callable[[SubscribedTransaction, str], None]
"""
A function that takes a SubscribedTransaction and the event name.
"""
```
When events are emitted, listeners are called one-by-one in the order they were registered.
Listeners registered via `on_batch` are called first, with the full set of transactions that were found in the current poll (zero or more). Following that, each transaction in turn is passed to the listener(s) that subscribed with `on` for that event.
The default type that will be received is a `SubscribedTransaction`, which can be imported like so:
```python
from algokit_subscriber import SubscribedTransaction
```
See the [detail about this type](subscriptions#subscribedtransaction).
Alternatively, if you defined a mapper against the filter then it will be applied before passing the objects through.
If you call `on_poll` it will be called last (after all `on` and `on_batch` listeners) for each poll, with the full set of transactions for that poll and [metadata about the poll result](./subscriptions#transactionsubscriptionresult). This allows you to process the entire poll batch in one transaction or have a hook to call after processing individual listeners (e.g. to commit a transaction).
If you want to run code before a poll starts (e.g. to log or start a transaction) you can do so with `on_before_poll`.
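The listener ordering described above can be sketched with a toy dispatcher. This is illustrative only, not the library's implementation: it records the documented call order (`on_before_poll`, then `on_batch` with the whole batch, then each `on` listener per transaction, then `on_poll` last).

```python
# Toy dispatcher mirroring the documented listener ordering.
calls: list[str] = []

def dispatch_poll(transactions: list[str]) -> None:
    calls.append("before_poll")                 # on_before_poll runs first
    calls.append(f"batch:{len(transactions)}")  # on_batch gets the whole batch
    for txn in transactions:                    # then each 'on' listener, per transaction
        calls.append(f"on:{txn}")
    calls.append("poll")                        # on_poll runs last

dispatch_poll(["t1", "t2"])
```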
## Poll the chain
[Section titled “Poll the chain”](#poll-the-chain)
There are two methods to poll the chain for events: `poll_once` and `start`:
```python
def poll_once(self) -> TransactionSubscriptionResult:
    """
    Execute a single subscription poll.
    """

def start(self, inspect: Callable | None = None, suppress_log: bool = False) -> None:  # noqa: FBT001, FBT002
    """
    Start the subscriber in a loop until `stop` is called.
    This is useful when running in the context of a long-running process / container.
    If you want to inspect or log what happens under the covers you can pass in an `inspect` callable that will be called for each poll.
    """
```
`poll_once` is useful when you want to take control of scheduling the different polls, such as when running a Lambda on a schedule or a process via cron, etc. - it will do a single poll of the chain and return the result of that poll.
`start` is useful when you have a long-running process or container and you want it to loop indefinitely at the polling frequency specified in the constructor config. If you want to inspect or log what happens under the covers you can pass in an `inspect` callable that will be called for each poll.
If you use `start` then you can stop the polling by calling `stop`, which will ensure everything is cleaned up nicely.
## Handling errors
[Section titled “Handling errors”](#handling-errors)
To handle errors, you can register error handlers/listeners using the `on_error` method. This works in a similar way to the other `on*` methods.
When no error listeners have been registered, a default listener re-raises any exception so it can be caught by global uncaught exception handlers. Once an error listener has been registered, the default listener is removed and it becomes the responsibility of the registered error listener(s) to perform any error handling.
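A minimal sketch of those error semantics (not the library's code): with no registered listeners the exception is re-raised; once a listener is registered, it takes over.

```python
# Sketch of the described error semantics, using a plain list of listeners.
error_listeners = []

def handle_error(error: Exception) -> None:
    if not error_listeners:
        raise error  # default behaviour: re-raise for global uncaught exception handlers
    for listener in error_listeners:
        listener(error)

# With a registered listener, the error is handled rather than raised
seen: list[str] = []
error_listeners.append(lambda e: seen.append(str(e)))
handle_error(ValueError("boom"))

# Without listeners, the default behaviour re-raises
error_listeners.clear()
try:
    handle_error(ValueError("boom"))
    reraised = False
except ValueError:
    reraised = True
```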
## Examples
[Section titled “Examples”](#examples)
See the [main README](../README#examples).
# get_subscribed_transactions
`get_subscribed_transactions` is the core building block at the centre of this library. It's a simple but flexible mechanism that allows you to enact a single subscription "poll" of the Algorand blockchain.
This is a lower-level building block; you likely don't want to use it directly, but instead use the [`AlgorandSubscriber` class](./subscriber).
You can use this method to orchestrate everything from an index of all relevant data from the start of the chain through to simply subscribing to relevant transactions as they emerge at the tip of the chain. It allows you to have reliable at least once delivery even if your code has outages through the use of watermarking.
```python
def get_subscribed_transactions(
    subscription: TransactionSubscriptionParams,
    algod: AlgodClient,
    indexer: IndexerClient | None = None
) -> TransactionSubscriptionResult:
    """
    Executes a single pull/poll to subscribe to transactions on the configured Algorand
    blockchain for the given subscription context.
    """
```
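The watermarking pattern that enables at-least-once delivery can be sketched as follows. `fake_poll` is a hypothetical stand-in for `get_subscribed_transactions` so the loop is self-contained; in a real service the watermark would be loaded from and persisted to durable storage.

```python
# Orchestration sketch: persist the watermark only after processing each poll
# so a restart resumes where it left off (at-least-once delivery).
def fake_poll(watermark: int, tip: int, max_rounds: int = 500) -> dict:
    """Stand-in for get_subscribed_transactions: sync up to max_rounds per poll."""
    to_round = min(watermark + max_rounds, tip)
    return {"synced_round_range": (watermark + 1, to_round), "new_watermark": to_round}

watermark = 0  # would be loaded from durable storage in a real service
tip = 1200
while watermark < tip:
    result = fake_poll(watermark, tip)
    # ... process the subscribed transactions for this poll here ...
    watermark = result["new_watermark"]  # persist only after processing
```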
## TransactionSubscriptionParams
[Section titled “TransactionSubscriptionParams”](#transactionsubscriptionparams)
Specifying a subscription requires passing in a `TransactionSubscriptionParams` object, which configures the behaviour:
```python
class CoreTransactionSubscriptionParams(TypedDict):
    filters: list['NamedTransactionFilter']
    """The filter(s) to apply to find transactions of interest."""

    arc28_events: NotRequired[list['Arc28EventGroup']]
    """Any ARC-28 event definitions to process from app call logs"""

    max_rounds_to_sync: NotRequired[int | None]
    """
    The maximum number of rounds to sync from algod for each subscription pull/poll.
    Defaults to 500.
    """

    max_indexer_rounds_to_sync: NotRequired[int | None]
    """
    The maximum number of rounds to sync from indexer when using `sync_behaviour: 'catchup-with-indexer'`.
    """

    sync_behaviour: str
    """
    How to handle syncing when the current tip of the configured Algorand blockchain
    is more than `max_rounds_to_sync` rounds past `watermark`.
    """

class TransactionSubscriptionParams(CoreTransactionSubscriptionParams):
    watermark: int
    """
    The current round watermark that transactions have previously been synced to.
    """

    current_round: NotRequired[int]
    """
    The current tip of the configured Algorand blockchain.
    If not provided, it will be resolved on demand.
    """
```
## TransactionFilter
[Section titled “TransactionFilter”](#transactionfilter)
The [`filters` parameter](#transactionsubscriptionparams) allows you to specify a set of filters to return a subset of transactions you are interested in. Each filter contains a `filter` property of type `TransactionFilter`, which matches the following type:
```python
class TransactionFilter(TypedDict):
    type: NotRequired[str | list[str]]
    """Filter based on the given transaction type(s)."""

    sender: NotRequired[str | list[str]]
    """Filter to transactions sent from the specified address(es)."""

    receiver: NotRequired[str | list[str]]
    """Filter to transactions being received by the specified address(es)."""

    note_prefix: NotRequired[str | bytes]
    """Filter to transactions with a note having the given prefix."""

    app_id: NotRequired[int | list[int]]
    """Filter to transactions against the app with the given ID(s)."""

    app_create: NotRequired[bool]
    """Filter to transactions that are creating an app."""

    app_on_complete: NotRequired[str | list[str]]
    """Filter to transactions that have the given on complete(s)."""

    asset_id: NotRequired[int | list[int]]
    """Filter to transactions against the asset with the given ID(s)."""

    asset_create: NotRequired[bool]
    """Filter to transactions that are creating an asset."""

    min_amount: NotRequired[int]
    """
    Filter to transactions where the amount being transferred is greater
    than or equal to the given minimum (microAlgos or decimal units of an ASA if type: axfer).
    """

    max_amount: NotRequired[int]
    """
    Filter to transactions where the amount being transferred is less than
    or equal to the given maximum (microAlgos or decimal units of an ASA if type: axfer).
    """

    method_signature: NotRequired[str | list[str]]
    """
    Filter to app transactions that have the given ARC-0004 method selector(s) for
    the given method signature as the first app argument.
    """

    app_call_arguments_match: NotRequired[Callable[[list[bytes] | None], bool]]
    """Filter to app transactions that meet the given app arguments predicate."""

    arc28_events: NotRequired[list[dict[str, str]]]
    """
    Filter to app transactions that emit the given ARC-28 events.
    Note: the definitions for these events must be passed in to the subscription config via `arc28_events`.
    """

    balance_changes: NotRequired[list[dict[str, Union[int, list[int], str, list[str], 'BalanceChangeRole', list['BalanceChangeRole']]]]]
    """Filter to transactions that result in balance changes that match one or more of the given set of balance changes."""

    custom_filter: NotRequired[Callable[[TransactionResult], bool]]
    """Catch-all custom filter to filter for things that the rest of the filters don't provide."""
```
The properties you provide within a single filter are combined with AND logic, e.g.
```json
"filter": {
  "type": "axfer",
  "sender": "ABC..."
}
```
Will return transactions that are `axfer` type AND have a sender of `"ABC..."`.
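A minimal sketch of that AND matching semantics, with transactions represented as plain dicts (illustrative only, not the library's matching code; list values model the "one of the given values" behaviour of fields like `sender`):

```python
# Illustrative AND semantics: a transaction matches only if every specified
# filter property matches (a list value means "any of these values").
def matches(txn: dict, txn_filter: dict) -> bool:
    for key, expected in txn_filter.items():
        allowed = expected if isinstance(expected, list) else [expected]
        if txn.get(key) not in allowed:
            return False
    return True

f = {"type": "axfer", "sender": "ABC..."}
```

So `matches({"type": "axfer", "sender": "ABC..."}, f)` is true, while a `pay` transaction from the same sender would not match.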
### NamedTransactionFilter
[Section titled “NamedTransactionFilter”](#namedtransactionfilter)
You can specify multiple filters in an array, where each filter is a `NamedTransactionFilter`, which consists of:
```python
class NamedTransactionFilter(TypedDict):
    """Specify a named filter to apply to find transactions of interest."""

    name: str
    """The name to give the filter."""

    filter: TransactionFilter
    """The filter itself."""
```
This gives you the ability to detect which filter got matched when a transaction is returned, noting that you can use the same name multiple times if there are multiple filters (aka OR logic) that comprise the same logical filter.
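A sketch of that OR behaviour (illustrative only): a transaction is "subscribed" under a name if any filter with that name matches, and the matched names are recorded much like the `filters_matched` field on subscribed transactions.

```python
# Illustrative OR semantics via repeated filter names.
def filters_matched(txn: dict, named_filters: list[dict]) -> list[str]:
    """Return the (de-duplicated) names of filters the transaction matched."""
    matched: list[str] = []
    for nf in named_filters:
        if all(txn.get(k) == v for k, v in nf["filter"].items()):
            if nf["name"] not in matched:
                matched.append(nf["name"])
    return matched

named = [
    {"name": "transfers", "filter": {"type": "pay"}},
    {"name": "transfers", "filter": {"type": "axfer"}},  # same name => OR logic
]
```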
## Arc28EventGroup
[Section titled “Arc28EventGroup”](#arc28eventgroup)
The [`arc28_events` parameter](#transactionsubscriptionparams) allows you to define any ARC-28 events that may appear in subscribed transactions so they can either be subscribed to, or be processed and added to the resulting [subscribed transaction object](#subscribedtransaction).
## TransactionSubscriptionResult
[Section titled “TransactionSubscriptionResult”](#transactionsubscriptionresult)
The result of calling `get_subscribed_transactions` is a `TransactionSubscriptionResult`:
```python
class TransactionSubscriptionResult(TypedDict):
    """The result of a single subscription pull/poll."""

    synced_round_range: tuple[int, int]
    """The round range that was synced from/to"""

    current_round: int
    """The current detected tip of the configured Algorand blockchain."""

    starting_watermark: int
    """The watermark value that was retrieved at the start of the subscription poll."""

    new_watermark: int
    """
    The new watermark value to persist for the next call to
    `get_subscribed_transactions` to continue the sync.
    Will be equal to `synced_round_range[1]`. Only persist this
    after processing (or in the same atomic transaction as)
    subscribed transactions to keep it reliable.
    """

    subscribed_transactions: list['SubscribedTransaction']
    """
    Any transactions that matched the given filter within
    the synced round range. This substantively uses the indexer transaction
    format to represent the data with some additional fields.
    """

    block_metadata: NotRequired[list['BlockMetadata']]
    """
    The metadata about any blocks that were retrieved from algod as part
    of the subscription poll.
    """

class BlockMetadata(TypedDict):
    """Metadata about a block that was retrieved from algod."""

    hash: NotRequired[str | None]
    """The base64 block hash."""

    round: int
    """The round of the block."""

    timestamp: int
    """Block creation timestamp in seconds since epoch"""

    genesis_id: str
    """The genesis ID of the chain."""

    genesis_hash: str
    """The base64 genesis hash of the chain."""

    previous_block_hash: NotRequired[str | None]
    """The base64 previous block hash."""

    seed: str
    """The base64 seed of the block."""

    rewards: NotRequired['BlockRewards']
    """Fields relating to rewards"""

    parent_transaction_count: int
    """Count of parent transactions in this block"""

    full_transaction_count: int
    """Full count of transactions and inner transactions (recursively) in this block."""

    txn_counter: int
    """Number of the next transaction that will be committed after this block. It is 0 when no transactions have ever been committed (since TxnCounter started being supported)."""

    transactions_root: str
    """
    Root of transaction merkle tree using SHA512_256 hash function.
    This commitment is computed based on the PaysetCommit type specified in the block's consensus protocol.
    """

    transactions_root_sha256: str
    """
    TransactionsRootSHA256 is an auxiliary TransactionRoot, built using a vector commitment instead of a merkle tree, and SHA256 hash function instead of the default SHA512_256. This commitment can be used on environments where only the SHA256 function exists.
    """

    upgrade_state: NotRequired['BlockUpgradeState']
    """Fields relating to a protocol upgrade."""
```
## SubscribedTransaction
[Section titled “SubscribedTransaction”](#subscribedtransaction)
The common model used to expose a transaction that is returned from a subscription is a `SubscribedTransaction`, which can be imported like so:
```python
from algokit_subscriber import SubscribedTransaction
```
This type is substantively based on the Indexer [`TransactionResult`](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/src/types/indexer.ts#L77) [model](https://dev.algorand.co/reference/rest-apis/indexer#transaction) format. While the indexer type is used, the subscriber itself doesn't have to use indexer - any transactions it retrieves from algod are transformed to this common model type. Beyond the indexer type it has some modifications to:
* Add the `parent_transaction_id` field so inner transactions have a reference to their parent
* Override the type of `inner_txns` to be `SubscribedTransaction[]` so inner transactions (recursively) get these extra fields too
* Add emitted ARC-28 events via `arc28_events`
* The list of filter(s) that caused the transaction to be matched
The definition of the type is:
```python
TransactionResult = TypedDict("TransactionResult", {
    "id": str,
    "tx-type": str,
    "fee": int,
    "sender": str,
    "first-valid": int,
    "last-valid": int,
    "confirmed-round": NotRequired[int],
    "group": NotRequired[None | str],
    "note": NotRequired[str],
    "logs": NotRequired[list[str]],
    "round-time": NotRequired[int],
    "intra-round-offset": NotRequired[int],
    "signature": NotRequired['TransactionSignature'],
    "application-transaction": NotRequired['ApplicationTransactionResult'],
    "created-application-index": NotRequired[None | int],
    "asset-config-transaction": NotRequired['AssetConfigTransactionResult'],
    "created-asset-index": NotRequired[None | int],
    "asset-freeze-transaction": NotRequired['AssetFreezeTransactionResult'],
    "asset-transfer-transaction": NotRequired['AssetTransferTransactionResult'],
    "keyreg-transaction": NotRequired['KeyRegistrationTransactionResult'],
    "payment-transaction": NotRequired['PaymentTransactionResult'],
    "state-proof-transaction": NotRequired['StateProofTransactionResult'],
    "auth-addr": NotRequired[None | str],
    "closing-amount": NotRequired[None | int],
    "genesis-hash": NotRequired[str],
    "genesis-id": NotRequired[str],
    "inner-txns": NotRequired[list['TransactionResult']],
    "rekey-to": NotRequired[str],
    "lease": NotRequired[str],
    "local-state-delta": NotRequired[list[dict]],
    "global-state-delta": NotRequired[list[dict]],
    "receiver-rewards": NotRequired[int],
    "sender-rewards": NotRequired[int],
    "close-rewards": NotRequired[int]
})

class SubscribedTransaction(TransactionResult):
    """
    The common model used to expose a transaction that is returned from a subscription.

    Substantively, based on the Indexer `TransactionResult` model format with some modifications to:
    * Add the `parent_transaction_id` field so inner transactions have a reference to their parent
    * Override the type of `inner_txns` to be `SubscribedTransaction[]` so inner transactions (recursively) get these extra fields too
    * Add emitted ARC-28 events via `arc28_events`
    * Balance changes in algo or assets
    """

    parent_transaction_id: NotRequired[None | str]
    """The transaction ID of the parent of this transaction (if it's an inner transaction)."""

    inner_txns: NotRequired[list['SubscribedTransaction']]
    """Inner transactions produced by application execution."""

    arc28_events: NotRequired[list[EmittedArc28Event]]
    """Any ARC-28 events emitted from an app call."""

    filters_matched: NotRequired[list[str]]
    """The names of any filters that matched the given transaction to result in it being 'subscribed'."""

    balance_changes: NotRequired[list['BalanceChange']]
    """The balance changes in the transaction."""

class BalanceChange(TypedDict):
    """Represents a balance change effect for a transaction."""

    address: str
    """The address that the balance change is for."""

    asset_id: int
    """The asset ID of the balance change, or 0 for Algos."""

    amount: int
    """The amount of the balance change in smallest divisible unit or microAlgos."""

    roles: list['BalanceChangeRole']
    """The roles the account was playing that led to the balance change"""

class Arc28EventToProcess(TypedDict):
    """Represents an ARC-28 event to be processed."""

    group_name: str
    """The name of the ARC-28 event group the event belongs to"""

    event_name: str
    """The name of the ARC-28 event that was triggered"""

    event_signature: str
    """The signature of the event e.g. `EventName(type1,type2)`"""

    event_prefix: str
    """The 4-byte hex prefix for the event"""

    event_definition: Arc28Event
    """The ARC-28 definition of the event"""

class EmittedArc28Event(Arc28EventToProcess):
    """Represents an ARC-28 event that was emitted."""

    args: list[Any]
    """The ordered arguments extracted from the event that was emitted"""

    args_by_name: dict[str, Any]
    """The named arguments extracted from the event that was emitted (where the arguments had a name defined)"""
```
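For reference, the `event_prefix` is derived the same way as an ARC-4 method selector: the first 4 bytes of the SHA-512/256 hash of the event signature. A sketch (assumes a Python build whose OpenSSL exposes the `sha512_256` algorithm):

```python
import hashlib

# Derive the 4-byte ARC-28 event prefix from an event signature,
# mirroring how ARC-4 method selectors are computed.
def event_prefix(event_signature: str) -> str:
    digest = hashlib.new("sha512_256", event_signature.encode()).digest()
    return digest[:4].hex()

prefix = event_prefix("MyEvent(byte[],uint64)")
```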
## Examples
[Section titled “Examples”](#examples)
Here are some examples of how to use this method:
### Real-time notification of transactions of interest at the tip of the chain discarding stale records
[Section titled “Real-time notification of transactions of interest at the tip of the chain discarding stale records”](#real-time-notification-of-transactions-of-interest-at-the-tip-of-the-chain-discarding-stale-records)
If you ran the following code on a cron schedule of (say) every 5 seconds it would notify you every time the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`) sent a transaction. If the service stopped working for a period of time and fell behind then it would drop old records and restart notifications from the new tip.
```python
from algokit_subscriber import AlgorandSubscriber, SubscribedTransaction
from algokit_utils.beta.algorand_client import AlgorandClient

algorand = AlgorandClient.test_net()
watermark = 0

def get_watermark() -> int:
    return watermark

def set_watermark(new_watermark: int) -> None:
    global watermark  # noqa: PLW0603
    watermark = new_watermark

subscriber = AlgorandSubscriber(algod_client=algorand.client.algod, config={
    'filters': [
        {
            'name': 'filter1',
            'filter': {
                'sender': 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU'
            }
        }
    ],
    'wait_for_block_when_at_tip': True,
    'watermark_persistence': {
        'get': get_watermark,
        'set': set_watermark
    },
    'sync_behaviour': 'skip-sync-newest',
    'max_rounds_to_sync': 100
})

def notify_transactions(transaction: SubscribedTransaction, _: str) -> None:
    # Implement your notification logic here
    print(f"New transaction from {transaction['sender']}")  # noqa: T201

subscriber.on('filter1', notify_transactions)
subscriber.start()
```
### Real-time notification of transactions of interest at the tip of the chain with at least once delivery
[Section titled “Real-time notification of transactions of interest at the tip of the chain with at least once delivery”](#real-time-notification-of-transactions-of-interest-at-the-tip-of-the-chain-with-at-least-once-delivery)
If you ran the following code on a cron schedule of (say) every 5 seconds it would notify you every time the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`) sent a transaction. If the service stopped working for a period of time and fell behind then it would pick up where it left off and catch up using algod (note: you need to connect it to an archival node).
```python
from algokit_subscriber import AlgorandSubscriber, SubscribedTransaction
from algokit_utils.beta.algorand_client import AlgorandClient

algorand = AlgorandClient.test_net()
watermark = 0

def get_watermark() -> int:
    return watermark

def set_watermark(new_watermark: int) -> None:
    global watermark  # noqa: PLW0603
    watermark = new_watermark

subscriber = AlgorandSubscriber(algod_client=algorand.client.algod, config={
    'filters': [
        {
            'name': 'filter1',
            'filter': {
                'sender': 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU'
            }
        }
    ],
    'wait_for_block_when_at_tip': True,
    'watermark_persistence': {
        'get': get_watermark,
        'set': set_watermark
    },
    'sync_behaviour': 'sync-oldest-start-now',
    'max_rounds_to_sync': 100
})

def notify_transactions(transaction: SubscribedTransaction, _: str) -> None:
    # Implement your notification logic here
    print(f"New transaction from {transaction['sender']}")  # noqa: T201

subscriber.on('filter1', notify_transactions)
subscriber.start()
```
### Quickly building a reliable, up-to-date cache index of all transactions of interest from the beginning of the chain
[Section titled “Quickly building a reliable, up-to-date cache index of all transactions of interest from the beginning of the chain”](#quickly-building-a-reliable-up-to-date-cache-index-of-all-transactions-of-interest-from-the-beginning-of-the-chain)
If you ran the following code on a cron schedule of (say) every 30 - 60 seconds it would create a cached index of all assets created by the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`). Given it uses indexer to catch up you can deploy this into a fresh environment with an empty database and it will catch up in seconds rather than days.
```python
from algokit_subscriber import AlgorandSubscriber, SubscribedTransaction
from algokit_utils.beta.algorand_client import AlgorandClient

algorand = AlgorandClient.test_net()
watermark = 0

def get_watermark() -> int:
    return watermark

def set_watermark(new_watermark: int) -> None:
    global watermark  # noqa: PLW0603
    watermark = new_watermark

def save_transactions(transactions: list[SubscribedTransaction]) -> None:
    # Implement your logic to save transactions here
    pass

subscriber = AlgorandSubscriber(algod_client=algorand.client.algod, indexer_client=algorand.client.indexer, config={
    'filters': [
        {
            'name': 'filter1',
            'filter': {
                'type': 'acfg',
                'sender': 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU',
                'asset_create': True
            }
        }
    ],
    'wait_for_block_when_at_tip': True,
    'watermark_persistence': {
        'get': get_watermark,
        'set': set_watermark
    },
    'sync_behaviour': 'catchup-with-indexer',
    'max_rounds_to_sync': 1000
})

def process_transactions(transaction: SubscribedTransaction, _: str) -> None:
    save_transactions([transaction])

subscriber.on('filter1', process_transactions)
subscriber.start()
```
# Algorand transaction subscription / indexing
## Quick start
[Section titled “Quick start”](#quick-start)
```typescript
// Create subscriber
const subscriber = new AlgorandSubscriber(
  {
    filters: [
      {
        name: 'filter1',
        filter: {
          type: TransactionType.pay,
          sender: 'ABC...',
        },
      },
    ],
    /* ... other options (use intellisense to explore) */
  },
  algod,
  optionalIndexer,
);

// Set up subscription(s)
subscriber.on('filter1', async transaction => {
  // ...
});
//...

// Set up error handling
subscriber.onError(e => {
  // ...
});

// Either: Start the subscriber (if in long-running process)
subscriber.start();

// OR: Poll the subscriber (if in cron job / periodic lambda)
subscriber.pollOnce();
```
## Capabilities
[Section titled “Capabilities”](#capabilities)
* [Algorand transaction subscription / indexing](#algorand-transaction-subscription--indexing)
* [Quick start](#quick-start)
* [Capabilities](#capabilities)
* [Notification *and* indexing](#notification-and-indexing)
* [Low latency processing](#low-latency-processing)
* [Watermarking and resilience](#watermarking-and-resilience)
* [Extensive subscription filtering](#extensive-subscription-filtering)
* [ARC-28 event subscription and reads](#arc-28-event-subscription-and-reads)
* [First-class inner transaction support](#first-class-inner-transaction-support)
* [State-proof support](#state-proof-support)
* [Simple programming model](#simple-programming-model)
* [Easy to deploy](#easy-to-deploy)
* [Fast initial index](#fast-initial-index)
* [Entry points](#entry-points)
* [Reference docs](#reference-docs)
* [Emit ARC-28 events](#emit-arc-28-events)
* [Algorand Python](#algorand-python)
* [TealScript](#tealscript)
* [PyTEAL](#pyteal)
* [TEAL](#teal)
### Notification *and* indexing
[Section titled “Notification and indexing”](#notification-and-indexing)
This library supports the ability to stay at the tip of the chain and power notification / alerting type scenarios through the use of the `syncBehaviour` parameter in both [`AlgorandSubscriber`](/algokit/subscriber/typescript/subscriber/) and [`getSubscribedTransactions`](/algokit/subscriber/typescript/subscriptions/). For example to stay at the tip of the chain for notification/alerting scenarios you could do:
```typescript
const subscriber = new AlgorandSubscriber({syncBehaviour: 'skip-sync-newest', maxRoundsToSync: 100, ...}, ...)
// or:
getSubscribedTransactions({syncBehaviour: 'skip-sync-newest', maxRoundsToSync: 100, ...}, ...)
```
The `currentRound` parameter (available when calling `getSubscribedTransactions`) can be used to set the tip of the chain. If not specified, the tip will be automatically detected. Whilst this is generally not needed, it is useful in scenarios where the tip is being detected as part of another process and you only want to sync to that point and no further.
The `maxRoundsToSync` parameter controls how many rounds it will process when first starting when it’s not caught up to the tip of the chain. While it’s caught up to the chain it will keep processing as many rounds as are available from the last round it processed to when it next tries to sync (see below for how to control that).
If you expect your service to stay running resiliently and never fall more than `maxRoundsToSync` behind the tip of the chain, and you'd prefer it to throw an error when it loses track of the tip rather than continue processing old records or skip to the newest, you can set the `syncBehaviour` parameter to `fail`.
The `syncBehaviour` parameter can also be set to `sync-oldest-start-now` if you want to process all transactions once you start alerting/notifying. This requires that your service needs to keep running otherwise it could fall behind and start processing old records / take a while to catch back up with the tip of the chain. This is also a useful setting if you are creating an indexer that only needs to process from the moment the indexer is deployed rather than from the beginning of the chain. Note: this requires the [initial watermark](#watermarking-and-resilience) to start at 0 to work.
The `syncBehaviour` parameter can also be set to `sync-oldest`, which is a more traditional indexing scenario where you want to process every single block from the beginning of the chain. This can take a long time to process by default (e.g. days), noting there is a [fast catchup feature](#fast-initial-index). If you don’t want to start from the beginning of the chain you can [set the initial watermark](#watermarking-and-resilience) to a higher round number than 0 to start indexing from that point.
### Low latency processing
[Section titled “Low latency processing”](#low-latency-processing)
You can control the polling semantics of the library when using the [`AlgorandSubscriber`](/algokit/subscriber/typescript/subscriber/) by either specifying the `frequencyInSeconds` parameter to control the duration between polls or you can use the `waitForBlockWhenAtTip` parameter to indicate the subscriber should [call algod to ask it to inform the subscriber when a new round is available](https://dev.algorand.co/reference/rest-apis/algod/#waitforblock) so the subscriber can immediately process that round with a much lower-latency. When this mode is set, the subscriber intelligently uses this option only when it’s caught up to the tip of the chain, but otherwise uses `frequencyInSeconds` while catching up to the tip of the chain.
e.g.
```typescript
// When catching up to tip of chain will poll every 1s for the next 1000 blocks, but when caught up will poll algod for a new block so it can be processed immediately with low latency
const subscriber = new AlgorandSubscriber({frequencyInSeconds: 1, waitForBlockWhenAtTip: true, maxRoundsToSync: 1000, ...}, ...)
...
subscriber.start()
```
If you are using [`getSubscribedTransactions`](/algokit/subscriber/typescript/subscriptions/) or the `pollOnce` method on `AlgorandSubscriber` then you can use your infrastructure and/or surrounding orchestration code to take control of the polling duration.
If you want to manually run code that waits for a given round to become available you can execute the following algosdk code:
```typescript
await algod.statusAfterBlock(roundNumberToWaitFor).do();
```
It’s worth noting special care has been placed in the subscriber library to properly handle abort signalling. All asynchronous operations, including algod polls and polling waits, have abort signal handling in place, so if you call `subscriber.stop()` at any point in time it should exit almost immediately and cleanly; if you want to wait for the stop to finish you can `await subscriber.stop()`.
If you want to hook this up to Node.js process signals you can include code like this in your service entrypoint:
```typescript
['SIGINT', 'SIGTERM', 'SIGQUIT'].forEach(signal =>
process.on(signal, () => {
// eslint-disable-next-line no-console
console.log(`Received ${signal}; stopping subscriber...`);
subscriber.stop(signal);
}),
);
```
### Watermarking and resilience
[Section titled “Watermarking and resilience”](#watermarking-and-resilience)
You can create reliable syncing / indexing services through a simple round watermarking capability that allows those services to recover from an outage.
This works through the use of the `watermarkPersistence` parameter in [`AlgorandSubscriber`](/algokit/subscriber/typescript/subscriber/) and `watermark` parameter in [`getSubscribedTransactions`](/algokit/subscriber/typescript/subscriptions/):
```typescript
async function getSavedWatermark(): Promise<bigint> {
// Return the watermark from a persistence store e.g. database, redis, file system, etc.
}
async function saveWatermark(newWatermark: bigint): Promise<void> {
// Save the watermark to a persistence store e.g. database, redis, file system, etc.
}
...
const subscriber = new AlgorandSubscriber({watermarkPersistence: {
get: getSavedWatermark, set: saveWatermark
}, ...}, ...)
// or:
const watermark = await getSavedWatermark()
const result = await getSubscribedTransactions({watermark, ...}, ...)
await saveWatermark(result.newWatermark)
```
By using a persistence store, you can gracefully respond to an outage of your subscriber: the next time it starts it will pick back up from the point where it last persisted. It’s worth noting this provides at-least-once delivery semantics, so you need to handle duplicate events.
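As an illustrative sketch (not part of the library), a minimal file-based `watermarkPersistence` implementation for a single-process subscriber might look like this; the file path handling and JSON shape are arbitrary choices:

```typescript
import { promises as fs } from 'node:fs'

// Hedged sketch: a minimal file-based watermark store suitable for a single
// subscriber process. The JSON shape is an illustrative choice.
export function fileWatermark(filePath: string) {
  return {
    async get(): Promise<bigint> {
      try {
        // bigint can't be JSON-serialised directly, so it is stored as a string
        return BigInt(JSON.parse(await fs.readFile(filePath, 'utf-8')).watermark)
      } catch {
        return 0n // no file yet: start from the beginning (or your chosen round)
      }
    },
    async set(newWatermark: bigint): Promise<void> {
      await fs.writeFile(filePath, JSON.stringify({ watermark: newWatermark.toString() }))
    },
  }
}
```

You could then pass this as `watermarkPersistence: fileWatermark('.watermark.json')` when constructing the subscriber.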
Alternatively, if you want at-most-once delivery semantics you could use the [transactional outbox pattern](https://microservices.io/patterns/data/transactional-outbox.html) and wrap a unit of work from an ACID persistence store (e.g. a SQL database with a serializable or repeatable read transaction) around the watermark retrieval, transaction processing and watermark persistence, so that the processing of transactions and the watermarking of a single poll happen in one atomic transaction. In this model you would then process the transactions in a separate process from the persistence store (and likely have a flag on each transaction to indicate whether it has been processed). You would need to ensure that only one subscriber is actively running at a time to guarantee this delivery semantic. For resilience you may want to have multiple subscribers running, with a primary node that actually executes based on acquisition of a distributed semaphore / lease.
If you are doing a quick test or creating an ephemeral subscriber that just needs to exist in-memory and doesn’t need to recover resiliently (useful with `syncBehaviour` of `skip-sync-newest` for instance) then you can use an in-memory variable instead of a persistence store, e.g.:
```typescript
let watermark = 0n
const subscriber = new AlgorandSubscriber({watermarkPersistence: {
get: () => watermark, set: (newWatermark: bigint) => watermark = newWatermark
}, ...}, ...)
// or:
let watermark = 0n
const result = await getSubscribedTransactions({watermark, ...}, ...)
watermark = result.newWatermark
```
### Extensive subscription filtering
[Section titled “Extensive subscription filtering”](#extensive-subscription-filtering)
This library has extensive filtering options available to you so you can have fine-grained control over which transactions you are interested in.
There is a core type that is used to specify the filters [`TransactionFilter`](/algokit/subscriber/typescript/subscriptions/#transactionfilter):
```typescript
const subscriber = new AlgorandSubscriber({filters: [{name: 'filterName', filter: {/* Filter properties */}}], ...}, ...)
// or:
getSubscribedTransactions({filters: [{name: 'filterName', filter: {/* Filter properties */}}], ... }, ...)
```
Currently this allows you to filter based on any combination (AND logic) of:
* Transaction type e.g. `filter: { type: TransactionType.axfer }` or `filter: {type: [TransactionType.axfer, TransactionType.pay] }`
* Account (sender and receiver) e.g. `filter: { sender: "ABCDE..F" }` or `filter: { sender: ["ABCDE..F", "ZYXWV..A"] }` and `filter: { receiver: "12345..6" }` or `filter: { receiver: ["ABCDE..F", "ZYXWV..A"] }`
* Note prefix e.g. `filter: { notePrefix: "xyz" }`
* Apps
* ID e.g. `filter: { appId: 54321 }` or `filter: { appId: [54321, 12345] }`
* Creation e.g. `filter: { appCreate: true }`
* Call on-complete(s) e.g. `filter: { appOnComplete: ApplicationOnComplete.optin }` or `filter: { appOnComplete: [ApplicationOnComplete.optin, ApplicationOnComplete.noop] }`
* ARC-4 method signature(s) e.g. `filter: { methodSignature: "MyMethod(uint64,string)" }` or `filter: { methodSignature: ["MyMethod(uint64,string)uint64", "MyMethod2(uint64)"] }`
* Call arguments e.g.
```typescript
filter: {
  appCallArgumentsMatch: appCallArguments =>
    appCallArguments.length > 1 &&
    Buffer.from(appCallArguments[1]).toString('utf-8') === 'hello_world',
}
```
* Emitted ARC-28 event(s) e.g.
```typescript
filter: {
  arc28Events: [{ groupName: 'group1', eventName: 'MyEvent' }],
}
```
Note: For this to work you need to [specify ARC-28 events in the subscription config](#arc-28-event-subscription-and-reads).
* Assets
* ID e.g. `filter: { assetId: 123456n }` or `filter: { assetId: [123456n, 456789n] }`
* Creation e.g. `filter: { assetCreate: true }`
* Amount transferred (min and/or max) e.g. `filter: { type: TransactionType.axfer, minAmount: 1, maxAmount: 100 }`
* Balance changes (asset ID, sender, receiver, close to, min and/or max change) e.g. `filter: { balanceChanges: [{assetId: [15345n, 36234n], roles: [BalanceChangeRole.sender], address: "ABC...", minAmount: 1, maxAmount: 2}]}`
* Algo transfers (pay transactions)
* Amount transferred (min and/or max) e.g. `filter: { type: TransactionType.pay, minAmount: 1, maxAmount: 100 }`
* Balance changes (sender, receiver, close to, min and/or max change) e.g. `filter: { balanceChanges: [{roles: [BalanceChangeRole.sender], address: "ABC...", minAmount: 1, maxAmount: 2}]}`
You can supply multiple, named filters via the [`NamedTransactionFilter`](/algokit/subscriber/typescript/subscriptions/#namedtransactionfilter) type. When subscribed transactions are returned each transaction will have a `filtersMatched` property that will have an array of any filter(s) that caused that transaction to be returned. When using [`AlgorandSubscriber`](/algokit/subscriber/typescript/subscriber/), you can subscribe to events that are emitted with the filter name.
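For example (a hedged sketch; filter values are illustrative), two overlapping named filters where a single transaction can match both:

```typescript
// Illustrative sketch: a payment of >= 1,000,000 microAlgo matches both filters,
// so both names will appear in that transaction's `filtersMatched` array.
const result = await getSubscribedTransactions(
  {
    filters: [
      { name: 'payments', filter: { type: TransactionType.pay } },
      { name: 'big-payments', filter: { type: TransactionType.pay, minAmount: 1_000_000 } },
    ],
    // ...watermark, syncBehaviour, etc.
  },
  algod,
)
for (const txn of result.subscribedTransactions) {
  console.log(txn.id, txn.filtersMatched)
}
```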
### ARC-28 event subscription and reads
[Section titled “ARC-28 event subscription and reads”](#arc-28-event-subscription-and-reads)
You can [subscribe to ARC-28 events](#extensive-subscription-filtering) for a smart contract, similar to how you can [subscribe to events in Ethereum](https://docs.web3js.org/guides/events_subscriptions/).
Furthermore, you can receive any ARC-28 events that a smart contract call you subscribe to emitted in the [subscribed transaction object](/algokit/subscriber/typescript/subscriptions/#subscribedtransaction).
Both subscription and receiving ARC-28 events work through the use of the `arc28Events` parameter in [`AlgorandSubscriber`](/algokit/subscriber/typescript/subscriber/) and [`getSubscribedTransactions`](/algokit/subscriber/typescript/subscriptions/):
```typescript
const group1Events: Arc28EventGroup = {
groupName: 'group1',
events: [
{
name: 'MyEvent',
args: [
{type: 'uint64'},
{type: 'string'},
]
}
]
}
const subscriber = new AlgorandSubscriber({arc28Events: [group1Events], ...}, ...)
// or:
const result = await getSubscribedTransactions({arc28Events: [group1Events], ...}, ...)
```
The `Arc28EventGroup` type has the following definition:
```typescript
/** Specifies a group of ARC-28 event definitions along with instructions for when to attempt to process the events. */
export interface Arc28EventGroup {
/** The name to designate for this group of events. */
groupName: string;
/** Optional list of app IDs that this event should apply to */
processForAppIds?: bigint[];
/** Optional predicate to indicate if these ARC-28 events should be processed for the given transaction */
processTransaction?: (transaction: TransactionResult) => boolean;
/** Whether or not to silently (with warning log) continue if an error is encountered processing the ARC-28 event data; default = false */
continueOnError?: boolean;
/** The list of ARC-28 event definitions */
events: Arc28Event[];
}
/**
* The definition of metadata for an ARC-28 event per https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0028.md#event.
*/
export interface Arc28Event {
/** The name of the event */
name: string;
/** Optional, user-friendly description for the event */
desc?: string;
/** The arguments of the event, in order */
args: Array<{
/** The type of the argument */
type: string;
/** Optional, user-friendly name for the argument */
name?: string;
/** Optional, user-friendly description for the argument */
desc?: string;
}>;
}
```
Each group allows you to apply logic to the applicability and processing of a set of events. This structure allows you to safely process the events from multiple contracts in the same subscriber, or perform more advanced filtering logic to event processing.
When specifying an [ARC-28 event filter](#extensive-subscription-filtering), you specify both the `groupName` and `eventName`(s) to narrow down what event(s) you want to subscribe to.
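Putting the pieces together, a hedged sketch of a subscriber that both decodes and filters on `MyEvent` (reusing the `group1Events` definition above; other config elided):

```typescript
// Illustrative sketch: only app calls that emitted MyEvent from group1 are
// subscribed, and the decoded event data is attached to the returned transactions.
const subscriber = new AlgorandSubscriber(
  {
    arc28Events: [group1Events], // the event group defined above
    filters: [
      {
        name: 'my-events',
        filter: { arc28Events: [{ groupName: 'group1', eventName: 'MyEvent' }] },
      },
    ],
    // ...watermarkPersistence, syncBehaviour, etc.
  },
  algod,
)
```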
If you want to emit an ARC-28 event from your smart contract you can follow the [below code examples](#emit-arc-28-events).
### First-class inner transaction support
[Section titled “First-class inner transaction support”](#first-class-inner-transaction-support)
When you subscribe to transactions any subscriptions that cover an inner transaction will pick up that inner transaction and [return](/algokit/subscriber/typescript/subscriptions/#subscribedtransaction) it to you correctly.
Note: the behaviour Algorand Indexer has is to return the parent transaction, not the inner transaction; this library will always return the actual transaction you subscribed to.
If you [receive](/algokit/subscriber/typescript/subscriptions/#subscribedtransaction) an inner transaction then there will be a `parentTransactionId` field populated that allows you to see that it was an inner transaction and how to identify the parent transaction.
The `id` of an inner transaction will be set to `{parentTransactionId}/inner/{index-of-child-within-parent}` where `{index-of-child-within-parent}` is calculated based on uniquely walking the tree of potentially nested inner transactions. [This transaction in Allo.info](https://allo.info/tx/group/cHiEEvBCRGnUhz9409gHl%2Fvn00lYDZnJoppC3YexRr0%3D) is a good illustration of how inner transaction indexes are allocated (this library uses the same approach).
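As an illustration of that id scheme (these helpers are not part of the library), the format can be built and parsed like this:

```typescript
// Illustrative helpers for the `{parentTransactionId}/inner/{index}` id format
// described above; not part of the library itself.
export function innerTransactionId(parentTransactionId: string, indexWithinParent: number): string {
  return `${parentTransactionId}/inner/${indexWithinParent}`
}

export function parseTransactionId(id: string): { parentTransactionId?: string; innerIndex?: number } {
  const match = /^(.*)\/inner\/(\d+)$/.exec(id)
  if (!match) return {} // a top-level transaction id
  return { parentTransactionId: match[1], innerIndex: Number(match[2]) }
}
```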
All [returned](/algokit/subscriber/typescript/subscriptions/#subscribedtransaction) transactions will have an `inner-txns` property with any inner transactions of that transaction populated (recursively).
The `intra-round-offset` field in a [subscribed transaction or inner transaction within](/algokit/subscriber/typescript/subscriptions/#subscribedtransaction) is calculated by walking the full tree depth-first from the first transaction in the block, through any inner transactions recursively, starting from an index of 0. This algorithm matches the one in Algorand Indexer and ensures that all transactions have a unique index, but the top-level transactions in the block don’t necessarily have sequential indexes.
### State-proof support
[Section titled “State-proof support”](#state-proof-support)
You can subscribe to [state proof](https://dev.algorand.co/concepts/protocol/stateproofs) transactions using this subscriber library. At the time of writing state proof transactions are not supported by algosdk v2 and custom handling has been added to ensure this valuable type of transaction can be subscribed to.
The field level documentation of the [returned state proof transaction](/algokit/subscriber/typescript/subscriptions/#subscribedtransaction) is comprehensively documented via [AlgoKit Utils](https://github.com/algorandfoundation/algokit-utils-ts/blob/main/src/types/indexer.ts#L277).
By exposing this functionality, this library can be used to create a [light client](https://dev.algorand.co/concepts/protocol/stateproofs).
### Simple programming model
[Section titled “Simple programming model”](#simple-programming-model)
This library is easy to consume through [type-safe TypeScript methods and objects](#entry-points), and subscribed transactions have a [comprehensive and familiar model type](/algokit/subscriber/typescript/subscriptions/#subscribedtransaction) with all relevant/useful information about the transaction (transaction id, round number, created asset/app id, app logs, etc.), modelled on the indexer data model. This model is used regardless of whether the transactions come from indexer or algod, so the experience is consistent.
Furthermore, the `AlgorandSubscriber` class has a familiar programming model based on the [Node.js EventEmitter](https://nodejs.org/en/learn/asynchronous-work/the-nodejs-event-emitter), but with async methods.
For more examples of how to use it see the [relevant documentation](/algokit/subscriber/typescript/subscriber/).
### Easy to deploy
[Section titled “Easy to deploy”](#easy-to-deploy)
Because the [entry points](#entry-points) of this library are simple TypeScript methods, you simply need to run them in a valid JavaScript execution environment. For instance, you could run it within a web browser if you want a user-facing app to show real-time transaction notifications in-app, or in a Node.js process running in the myriad of ways Node.js can be run.
Because of that, you have full control over how you want to deploy and use the subscriber; it will work with whatever persistence (e.g. sql, no-sql, etc.), queuing/messaging (e.g. queues, topics, buses, web hooks, web sockets) and compute (e.g. serverless periodic lambdas, continually running containers, virtual machines, etc.) services you want to use.
### Fast initial index
[Section titled “Fast initial index”](#fast-initial-index)
When [subscribing to the chain](#notification-and-indexing) for the purposes of building an index, you will often want to start at the beginning of the chain, or substantially in the past at the point when the solution you are subscribing for started.
This kind of catch-up can take days to process since algod only lets you retrieve a single block at a time and retrieving a block takes 0.5-1s. Given there are millions of blocks in MainNet, it doesn’t take long to do the math to see why catching up takes so long.
This subscriber library has a unique, optional indexer catch up mode that allows you to use indexer to catch up to the tip of the chain in seconds or minutes rather than days for your specific filter.
This is really handy when you are doing local development or spinning up a new environment and don’t want to wait for days.
To make use of this feature, you need to set the `syncBehaviour` config to `catchup-with-indexer` and ensure that you pass `indexer` in to the [entry point](#entry-points) along with `algod`.
Any [filter](#extensive-subscription-filtering) you apply will be seamlessly translated to indexer searches to get the historic transactions in the most efficient way possible based on the apis indexer exposes. Once the subscriber is within `maxRoundsToSync` of the tip of the chain it will switch to subscribing using `algod`.
To see this in action, you can run the Data History Museum example in this repository against MainNet and see it sync millions of rounds in seconds.
The indexer catchup isn’t magic: if the filter you are trying to catch up with generates an enormous number of transactions (e.g. hundreds of thousands or millions) then it will run very slowly and has the potential to exhaust compute and memory, depending on the constraints of the deployment environment you are running in. In that instance, there is a config parameter, `maxIndexerRoundsToSync`, that lets you break the indexer catchup into multiple “polls”, e.g. 100,000 rounds at a time. This allows a smaller batch of transactions to be retrieved and persisted in multiple batches.
To know whether your filter is likely to generate a lot of transactions, it’s worth understanding the architecture of indexer catchup, which runs in two stages:
1. **Pre-filtering**: Any filters that can be translated to the [indexer search transactions endpoint](https://dev.algorand.co/reference/rest-apis/indexer#transaction). This query is then run between the rounds that need to be synced and paginated in the max number of results (1000) at a time until all of the transactions are retrieved. This ensures we get round-based transactional consistency. This is the filter that can easily explode out though and take a long time when using indexer catchup. For avoidance of doubt, the following filters are the ones that are converted to a pre-filter:
* `sender` (single value)
* `receiver` (single value)
* `type` (single value)
* `notePrefix`
* `appId` (single value)
* `assetId` (single value)
* `minAmount` (and `type = pay` or `assetId` provided)
* `maxAmount` (and `maxAmount < Number.MAX_SAFE_INTEGER` and `type = pay` or (`assetId` provided and `minAmount > 0`))
2. **Post-filtering**: All remaining filters are then applied in-memory to the resulting list of transactions that are returned from the pre-filter before being returned as subscribed transactions.
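For example (a hedged sketch), in the filter below the single-value `sender` can be pushed down to indexer as a pre-filter, while the `appCallArgumentsMatch` predicate can only be applied in-memory as a post-filter:

```typescript
// Illustrative sketch: during catchup-with-indexer, `sender` becomes an indexer
// search pre-filter; the predicate runs in-memory on the pre-filtered results.
const subscriber = new AlgorandSubscriber(
  {
    syncBehaviour: 'catchup-with-indexer',
    filters: [
      {
        name: 'my-app-calls',
        filter: {
          sender: 'ABCDE..F', // pre-filter: translated to an indexer search
          appCallArgumentsMatch: (args) => args.length > 0, // post-filter: applied in-memory
        },
      },
    ],
    // ...watermarkPersistence, etc.
  },
  algod,
  indexer, // required for catchup-with-indexer
)
```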
## Entry points
[Section titled “Entry points”](#entry-points)
There are two entry points into the subscriber functionality. The lower level [`getSubscribedTransactions`](/algokit/subscriber/typescript/subscriptions/) method that contains the raw subscription logic for a single “poll”, and the [`AlgorandSubscriber`](/algokit/subscriber/typescript/subscriber/) class that provides a higher level interface that is easier to use and takes care of a lot more orchestration logic for you (particularly around the ability to continuously poll).
Both are first-class supported ways of using this library, but we generally recommend starting with the `AlgorandSubscriber` since it’s easier to use and will cover the majority of use cases.
## Reference docs
[Section titled “Reference docs”](#reference-docs)
[See reference docs](latest/guides/code/README).
## Emit ARC-28 events
[Section titled “Emit ARC-28 events”](#emit-arc-28-events)
To emit ARC-28 events from your smart contract you can use the following syntax.
### Algorand Python
[Section titled “Algorand Python”](#algorand-python)
```python
@arc4.abimethod
def emit_swapped(self, a: arc4.UInt64, b: arc4.UInt64) -> None:
arc4.emit("MyEvent", a, b)
```
OR:
```python
class MyEvent(arc4.Struct):
a: arc4.String
b: arc4.UInt64
# ...
@arc4.abimethod
def emit_swapped(self, a: arc4.String, b: arc4.UInt64) -> None:
arc4.emit(MyEvent(a, b))
```
### TealScript
[Section titled “TealScript”](#tealscript)
```typescript
MyEvent = new EventLogger<{
stringField: string
intField: uint64
}>();
// ...
this.MyEvent.log({
stringField: "a",
intField: 2
})
```
### PyTEAL
[Section titled “PyTEAL”](#pyteal)
```python
class MyEvent(pt.abi.NamedTuple):
stringField: pt.abi.Field[pt.abi.String]
intField: pt.abi.Field[pt.abi.Uint64]
# ...
@app.external()
def myMethod(a: pt.abi.String, b: pt.abi.Uint64) -> pt.Expr:
# ...
return pt.Seq(
# ...
(event := MyEvent()).set(a, b),
pt.Log(pt.Concat(pt.MethodSignature("MyEvent(byte[],uint64)"), event._stored_value.load())),
pt.Approve(),
)
```
Note: if your event doesn’t have any dynamic ARC-4 types in it then you can simplify that to something like this:
```python
pt.Log(pt.Concat(pt.MethodSignature("MyEvent(byte[],uint64)"), a.get(), pt.Itob(b.get()))),
```
### TEAL
[Section titled “TEAL”](#teal)
```teal
method "MyEvent(byte[],uint64)"
frame_dig 0 // or any other command to put the ARC-4 encoded bytes for the event on the stack
concat
log
```
# `AlgorandSubscriber`
`AlgorandSubscriber` is a class that allows you to easily subscribe to the Algorand Blockchain, define a series of events that you are interested in, and react to those events. It has a similar programming model to [EventEmitter](https://nodejs.org/docs/latest/api/events.html), but also supports async/await.
## Creating a subscriber
[Section titled “Creating a subscriber”](#creating-a-subscriber)
To create an `AlgorandSubscriber` you can use the following constructor:
```typescript
/**
* Create a new `AlgorandSubscriber`.
* @param config The subscriber configuration
* @param algod An algod client
* @param indexer An (optional) indexer client; only needed if `subscription.syncBehaviour` is `catchup-with-indexer`
*/
constructor(config: AlgorandSubscriberConfig, algod: Algodv2, indexer?: Indexer)
```
The key configuration is the `AlgorandSubscriberConfig` interface:
````typescript
/** Configuration for an `AlgorandSubscriber`. */
export interface AlgorandSubscriberConfig extends CoreTransactionSubscriptionParams {
/** The set of filters to subscribe to / emit events for, along with optional data mappers. */
filters: SubscriberConfigFilter[];
/** The frequency to poll for new blocks in seconds; defaults to 1s */
frequencyInSeconds?: number;
/** Whether to wait via algod `/status/wait-for-block-after` endpoint when at the tip of the chain; reduces latency of subscription */
waitForBlockWhenAtTip?: boolean;
/** Methods to retrieve and persist the current watermark so syncing is resilient and maintains
* its position in the chain */
watermarkPersistence: {
/** Returns the current watermark that syncing has previously been processed to */
get: () => Promise<bigint>;
/** Persist the new watermark that has been processed */
set: (newWatermark: bigint) => Promise<void>;
};
}
/** Common parameters to control a single subscription pull/poll for both `AlgorandSubscriber` and `getSubscribedTransactions`. */
export interface CoreTransactionSubscriptionParams {
/** The filter(s) to apply to find transactions of interest.
* A list of filters with corresponding names.
*
* @example
* ```typescript
* filter: [{
* name: 'asset-transfers',
* filter: {
* type: TransactionType.axfer,
* //...
* }
* }, {
* name: 'payments',
* filter: {
* type: TransactionType.pay,
* //...
* }
* }]
* ```
*
*/
filters: NamedTransactionFilter[];
/** Any ARC-28 event definitions to process from app call logs */
arc28Events?: Arc28EventGroup[];
/** The maximum number of rounds to sync from algod for each subscription pull/poll.
*
* Defaults to 500.
*
* This gives you control over how many rounds you wait for at a time,
* your staleness tolerance when using `skip-sync-newest` or `fail`, and
* your catchup speed when using `sync-oldest`.
**/
maxRoundsToSync?: number;
/**
* The maximum number of rounds to sync from indexer when using `syncBehaviour: 'catchup-with-indexer'`.
*
* By default there is no limit and it will paginate through all of the rounds.
* Sometimes this can result in an incredibly long catchup time that may break the service
* due to execution and memory constraints, particularly for filters that result in a large number of transactions.
*
* Instead, this allows indexer catchup to be split into multiple polls, each with a transactionally consistent
* boundary based on the number of rounds specified here.
*/
maxIndexerRoundsToSync?: number;
/** If the current tip of the configured Algorand blockchain is more than `maxRoundsToSync`
* past `watermark` then how should that be handled:
* * `skip-sync-newest`: Discard old blocks/transactions and sync the newest; useful
* for real-time notification scenarios where you don't care about history and
* are happy to lose old transactions.
* * `sync-oldest`: Sync from the oldest rounds forward `maxRoundsToSync` rounds
* using algod; note: this will be slow if you are starting from 0 and requires
* an archival node.
* * `sync-oldest-start-now`: Same as `sync-oldest`, but if the `watermark` is `0`
* then start at the current round i.e. don't sync historical records, but once
* subscribing starts sync everything; note: if it falls behind it requires an
* archival node.
* * `catchup-with-indexer`: Sync to round `currentRound - maxRoundsToSync + 1`
* using indexer (much faster than using algod for long time periods) and then
* use algod from there.
* * `fail`: Throw an error.
**/
syncBehaviour:
| 'skip-sync-newest'
| 'sync-oldest'
| 'sync-oldest-start-now'
| 'catchup-with-indexer'
| 'fail';
}
````
`watermarkPersistence` allows you to ensure reliability against your code having outages since you can persist the last block your code processed up to and then provide it again the next time your code runs.
`maxRoundsToSync` and `syncBehaviour` allow you to control the subscription semantics as your code falls behind the tip of the chain (either on first run or after an outage).
`frequencyInSeconds` allows you to control the polling frequency and by association your latency tolerance for new events once you’ve caught up to the tip of the chain. Alternatively, you can set `waitForBlockWhenAtTip` to get the subscriber to ask algod to tell it when there is a new block ready to reduce latency when it’s caught up to the tip of the chain.
`arc28Events` are any [ARC-28 event definitions](/algokit/subscriber/typescript/subscriptions/#arc-28-events).
`filters` defines the different subscription(s) you want to make, and is defined by the following interface:
```typescript
/** A single event to subscribe to / emit. */
export interface SubscriberConfigFilter<T> extends NamedTransactionFilter {
/** An optional data mapper if you want the event data to take a certain shape when subscribing to events with this filter name.
*
* If not specified, then the event will simply receive a `SubscribedTransaction`.
*
* Note: if you provide multiple filters with the same name then only the mapper of the first matching filter will be used
*/
mapper?: (transaction: SubscribedTransaction[]) => Promise<T[]>;
}
/** Specify a named filter to apply to find transactions of interest. */
export interface NamedTransactionFilter {
/** The name to give the filter. */
name: string;
/** The filter itself. */
filter: TransactionFilter;
}
```
The event name is a unique name that describes the event you are subscribing to. The [filter](/algokit/subscriber/typescript/subscriptions/#transactionfilter) defines how to interpret transactions on the chain as being “collected” by that event and the mapper is an optional ability to map from the raw transaction to a more targeted type for your event subscribers to consume.
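For example, a mapper is just an async projection over the matched transactions; the simplified input type and the summary shape below are illustrative, not from the library:

```typescript
// Illustrative sketch: a mapper that projects raw subscribed transactions down
// to a small shape for listeners. TxnLike is a simplified stand-in type.
type TxnLike = { id: string; confirmedRound?: bigint }

export async function toPaymentSummaries(transactions: TxnLike[]): Promise<{ id: string; round: bigint }[]> {
  return transactions.map((t) => ({ id: t.id, round: t.confirmedRound ?? 0n }))
}
```

You could then register it as `{ name: 'payments', filter: { /* ... */ }, mapper: toPaymentSummaries }` so listeners receive the summary objects instead of full transactions.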
## Subscribing to events
[Section titled “Subscribing to events”](#subscribing-to-events)
Once you have created the `AlgorandSubscriber`, you can register handlers/listeners for the filters you have defined, or each poll as a whole batch.
You can do this via the `on`, `onBatch` and `onPoll` methods:
````typescript
/**
* Register an event handler to run on every subscribed transaction matching the given filter name.
*
* The listener can be async and it will be awaited if so.
* @example **Non-mapped**
* ```typescript
* subscriber.on('my-filter', async (transaction) => { console.log(transaction.id) })
* ```
* @example **Mapped**
* ```typescript
* new AlgorandSubscriber({filters: [{name: 'my-filter', filter: {...}, mapper: (t) => t.id}], ...}, algod)
* .on('my-filter', async (transactionId) => { console.log(transactionId) })
* ```
* @param filterName The name of the filter to subscribe to
* @param listener The listener function to invoke with the subscribed event
* @returns The subscriber so `on*` calls can be chained
*/
on<T = SubscribedTransaction>(filterName: string, listener: TypedAsyncEventListener<T>) {}
/**
* Register an event handler to run on all subscribed transactions matching the given filter name
* for each subscription poll.
*
* This is useful when you want to efficiently process / persist events
* in bulk rather than one-by-one.
*
* The listener can be async and it will be awaited if so.
* @example **Non-mapped**
* ```typescript
* subscriber.onBatch('my-filter', async (transactions) => { console.log(transactions.length) })
* ```
* @example **Mapped**
* ```typescript
* new AlgorandSubscriber({filters: [{name: 'my-filter', filter: {...}, mapper: (t) => t.id}], ...}, algod)
* .onBatch('my-filter', async (transactionIds) => { console.log(transactionIds) })
* ```
* @param filterName The name of the filter to subscribe to
* @param listener The listener function to invoke with the subscribed events
* @returns The subscriber so `on*` calls can be chained
*/
onBatch<T = SubscribedTransaction>(filterName: string, listener: TypedAsyncEventListener<T[]>) {}
/**
* Register an event handler to run before every subscription poll.
*
* This is useful when you want to do pre-poll logging or start a transaction etc.
*
* The listener can be async and it will be awaited if so.
* @example
* ```typescript
* subscriber.onBeforePoll(async (metadata) => { console.log(metadata.watermark) })
* ```
* @param listener The listener function to invoke with the pre-poll metadata
* @returns The subscriber so `on*` calls can be chained
*/
onBeforePoll(listener: TypedAsyncEventListener<BeforePollMetadata>) {}
/**
* Register an event handler to run after every subscription poll.
*
* This is useful when you want to process all subscribed transactions
* in a transactionally consistent manner rather than piecemeal for each
* filter, or to have a hook that occurs at the end of each poll to commit
* transactions etc.
*
* The listener can be async and it will be awaited if so.
* @example
* ```typescript
* subscriber.onPoll(async (pollResult) => { console.log(pollResult.subscribedTransactions.length, pollResult.syncedRoundRange) })
* ```
* @param listener The listener function to invoke with the poll result
* @returns The subscriber so `on*` calls can be chained
*/
onPoll(listener: TypedAsyncEventListener<TransactionSubscriptionResult>) {}
````
The `TypedAsyncEventListener` type is defined as:
```typescript
type TypedAsyncEventListener<T> = (event: T, eventName: string | symbol) => Promise<void> | void;
```
This allows you to use async or sync event listeners.
When you define an event listener it will be called, one-by-one (and awaited) in the order the registrations occur.
If you call `onBatch` it will be called first, with the full set of transactions that were found in the current poll (0 or more). Following that, each transaction in turn will then be passed to the listener(s) that subscribed with `on` for that event.
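This ordering can be modelled with a toy dispatcher (a simplified illustration, not the library's implementation):

```typescript
// Simplified model of the dispatch order: batch listeners run first with the
// whole poll's transactions, then each transaction is passed one-by-one
// (awaited) to the single-transaction listeners.
export async function dispatch<T>(
  transactions: T[],
  onListeners: ((txn: T) => Promise<void> | void)[],
  onBatchListeners: ((txns: T[]) => Promise<void> | void)[],
): Promise<void> {
  for (const listener of onBatchListeners) await listener(transactions)
  for (const txn of transactions) {
    for (const listener of onListeners) await listener(txn)
  }
}
```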
The default type that will be received is a `SubscribedTransaction`, which can be imported like so:
```typescript
import type { SubscribedTransaction } from '@algorandfoundation/algokit-subscriber/types';
```
See the [detail about this type](/algokit/subscriber/typescript/subscriptions/#subscribedtransaction).
Alternatively, if you defined a mapper against the filter then it will be applied before passing the objects through.
If you call `onPoll` it will be called last (after all `on` and `onBatch` listeners) for each poll, with the full set of transactions for that poll and [metadata about the poll result](/algokit/subscriber/typescript/subscriptions/#transactionsubscriptionresult). This allows you to process the entire poll batch in one transaction or have a hook to call after processing individual listeners (e.g. to commit a transaction).
If you want to run code before a poll starts (e.g. to log or start a transaction) you can do so with `onBeforePoll`.
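The dispatch order described above can be sketched with a minimal, hypothetical dispatcher (local types only, not the library's internals). Note the real subscriber additionally awaits async listeners one-by-one:

```typescript
// Sketch of listener dispatch order: onBatch listeners fire first with
// the whole batch, then `on` listeners fire per transaction, then
// onPoll listeners fire with the whole batch again.
type Listener<T> = (event: T) => void;

class MiniDispatcher<T> {
  private batchListeners: Listener<T[]>[] = [];
  private itemListeners: Listener<T>[] = [];
  private pollListeners: Listener<T[]>[] = [];

  onBatch(l: Listener<T[]>) { this.batchListeners.push(l); return this; }
  on(l: Listener<T>) { this.itemListeners.push(l); return this; }
  onPoll(l: Listener<T[]>) { this.pollListeners.push(l); return this; }

  dispatch(items: T[]) {
    for (const l of this.batchListeners) l(items); // whole batch first
    for (const item of items) for (const l of this.itemListeners) l(item); // then per item
    for (const l of this.pollListeners) l(items); // whole batch last
  }
}

// Record the order in which listeners fire for a poll of two transactions
const order: string[] = [];
new MiniDispatcher<string>()
  .onBatch((b) => order.push(`batch:${b.length}`))
  .on((t) => order.push(`on:${t}`))
  .onPoll((b) => order.push(`poll:${b.length}`))
  .dispatch(['txn1', 'txn2']);
console.log(order); // batch:2, on:txn1, on:txn2, poll:2
```

This mirrors why `onBatch` suits transactionally consistent processing (it sees the whole poll at once) while `on` suits piecemeal handling.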
## Poll the chain
[Section titled “Poll the chain”](#poll-the-chain)
There are two methods to poll the chain for events: `pollOnce` and `start`:
```typescript
/**
* Execute a single subscription poll.
*
* This is useful when executing in the context of a process
* triggered by a recurring schedule / cron.
* @returns The poll result
*/
async pollOnce(): Promise<TransactionSubscriptionResult> {}
/**
* Start the subscriber in a loop until `stop` is called.
*
* This is useful when running in the context of a long-running process / container.
* @param inspect A function that is called for each poll so the inner workings can be inspected / logged / etc.
* @param suppressLog Whether to suppress the default log output for each poll (optional)
*/
start(inspect?: (pollResult: TransactionSubscriptionResult) => void, suppressLog?: boolean): void {}
```
`pollOnce` is useful when you want to take control of scheduling the different polls, such as when running a Lambda on a schedule or a process via cron, etc. - it will do a single poll of the chain and return the result of that poll.
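The scheduled `pollOnce` pattern can be sketched as follows; `fakePollOnce` is a hypothetical stand-in for `subscriber.pollOnce()` and the in-memory `store` object stands in for your real persistence layer:

```typescript
// Hypothetical in-memory persistence store (use a database in practice)
const store = { watermark: 0n };

// Stand-in for subscriber.pollOnce(): pretend each poll syncs 100 rounds
function fakePollOnce(watermark: bigint): { newWatermark: bigint; syncedRoundRange: [bigint, bigint] } {
  const maxRoundsToSync = 100n;
  return {
    newWatermark: watermark + maxRoundsToSync,
    syncedRoundRange: [watermark + 1n, watermark + maxRoundsToSync],
  };
}

// The handler your cron schedule / Lambda would invoke
function scheduledHandler() {
  const result = fakePollOnce(store.watermark);
  // ... process the poll result here, then persist the watermark so the
  // next scheduled run resumes where this one left off
  store.watermark = result.newWatermark;
}

scheduledHandler();
scheduledHandler();
console.log(store.watermark); // 200n
```

The key design point is that the watermark is only persisted after processing, which is what gives you at-least-once delivery across runs.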
`start` is useful when you have a long-running process or container and you want it to loop infinitely at the specified polling frequency from the constructor config. If you want to inspect or log what happens under the covers you can pass in an `inspect` lambda that will be called for each poll.
If you use `start` then you can stop the polling by calling `stop`, which can be awaited to wait until everything is cleaned up. You may want to subscribe to Node.js kill signals to exit cleanly:
```typescript
['SIGINT', 'SIGTERM', 'SIGQUIT'].forEach(signal =>
process.on(signal, () => {
// eslint-disable-next-line no-console
console.log(`Received ${signal}; stopping subscriber...`);
subscriber.stop(signal).then(() => console.log('Subscriber stopped'));
}),
);
```
## Handling errors
[Section titled “Handling errors”](#handling-errors)
Because `start` isn’t a blocking method, you can’t simply wrap it in a try/catch. To handle errors, you can register error handlers/listeners using the `onError` method. This works in a similar way to the other `on*` methods.
````typescript
/**
* Register an error handler to run if an error is thrown during processing or event handling.
*
* This is useful to handle any errors that occur and can be used to perform retries, logging or cleanup activities.
*
* The listener can be async and it will be awaited if so.
* @example
* ```typescript
* subscriber.onError((error) => { console.error(error) })
* ```
* @example
* ```typescript
* const maxRetries = 3
* let retryCount = 0
* subscriber.onError(async (error) => {
* retryCount++
* if (retryCount > maxRetries) {
* console.error(error)
* return
* }
* console.log(`Error occurred, retrying in 2 seconds (${retryCount}/${maxRetries})`)
* await new Promise((r) => setTimeout(r, 2_000))
* subscriber.start()
 * })
* ```
* @param listener The listener function to invoke with the error that was thrown
* @returns The subscriber so `on*` calls can be chained
*/
onError(listener: ErrorListener) {}
````
The `ErrorListener` type is defined as:
```typescript
type ErrorListener = (error: unknown) => Promise<void> | void;
```
This allows you to use async or sync error listeners.
Multiple error listeners can be added, and each will be called one-by-one (and awaited) in the order the registrations occur.
When no error listeners have been registered, a default listener is used to re-throw any exception, so they can be caught by global uncaught exception handlers. Once an error listener has been registered, the default listener is removed and it’s the responsibility of the registered error listener to perform any error handling.
## Examples
[Section titled “Examples”](#examples)
See the [main README](latest/README#examples).
# `getSubscribedTransactions`
`getSubscribedTransactions` is the core building block at the centre of this library. It’s a simple, but flexible mechanism that allows you to enact a single subscription “poll” of the Algorand blockchain.
This is a lower-level building block; you likely don’t want to use it directly, but instead use the [`AlgorandSubscriber` class](/algokit/subscriber/typescript/subscriber/#creating-a-subscriber).
You can use this method to orchestrate everything from indexing all relevant data from the start of the chain through to simply subscribing to relevant transactions as they emerge at the tip of the chain. It allows you to have reliable at-least-once delivery, even if your code has outages, through the use of watermarking.
```typescript
/**
* Executes a single pull/poll to subscribe to transactions on the configured Algorand
* blockchain for the given subscription context.
* @param subscription The subscription context.
* @param algod An Algod client.
 * @param indexer An optional indexer client, only needed when `syncBehaviour` is `catchup-with-indexer`.
* @returns The result of this subscription pull/poll.
*/
export async function getSubscribedTransactions(
subscription: TransactionSubscriptionParams,
algod: Algodv2,
indexer?: Indexer,
): Promise<TransactionSubscriptionResult>;
```
## TransactionSubscriptionParams
[Section titled “TransactionSubscriptionParams”](#transactionsubscriptionparams)
Specifying a subscription requires passing in a `TransactionSubscriptionParams` object, which configures the behaviour:
````typescript
/** Parameters to control a single subscription pull/poll. */
export interface TransactionSubscriptionParams {
/** The filter(s) to apply to find transactions of interest.
* A list of filters with corresponding names.
*
* @example
* ```typescript
* filter: [{
* name: 'asset-transfers',
* filter: {
* type: TransactionType.axfer,
* //...
* }
* }, {
* name: 'payments',
* filter: {
* type: TransactionType.pay,
* //...
* }
* }]
* ```
*
*/
filters: NamedTransactionFilter[];
/** Any ARC-28 event definitions to process from app call logs */
arc28Events?: Arc28EventGroup[];
/** The current round watermark that transactions have previously been synced to.
*
* Persist this value as you process the transactions returned from this method
* to allow for resilient and incremental syncing.
*
* Syncing will start from `watermark + 1`.
*
* Start from 0 if you want to start from the beginning of time, noting that
* this will be slow if `syncBehaviour` is `sync-oldest`.
**/
watermark: bigint;
/** The maximum number of rounds to sync for each subscription pull/poll.
*
* Defaults to 500.
*
* This gives you control over how many rounds you wait for at a time,
* your staleness tolerance when using `skip-sync-newest` or `fail`, and
* your catchup speed when using `sync-oldest`.
**/
maxRoundsToSync?: number;
/**
 * The maximum number of rounds to sync from indexer when using `syncBehaviour: 'catchup-with-indexer'`.
*
* By default there is no limit and it will paginate through all of the rounds.
* Sometimes this can result in an incredibly long catchup time that may break the service
* due to execution and memory constraints, particularly for filters that result in a large number of transactions.
*
* Instead, this allows indexer catchup to be split into multiple polls, each with a transactionally consistent
* boundary based on the number of rounds specified here.
*/
maxIndexerRoundsToSync?: number;
/** If the current tip of the configured Algorand blockchain is more than `maxRoundsToSync`
* past `watermark` then how should that be handled:
* * `skip-sync-newest`: Discard old blocks/transactions and sync the newest; useful
* for real-time notification scenarios where you don't care about history and
* are happy to lose old transactions.
* * `sync-oldest`: Sync from the oldest rounds forward `maxRoundsToSync` rounds
* using algod; note: this will be slow if you are starting from 0 and requires
* an archival node.
* * `sync-oldest-start-now`: Same as `sync-oldest`, but if the `watermark` is `0`
* then start at the current round i.e. don't sync historical records, but once
* subscribing starts sync everything; note: if it falls behind it requires an
* archival node.
* * `catchup-with-indexer`: Sync to round `currentRound - maxRoundsToSync + 1`
* using indexer (much faster than using algod for long time periods) and then
* use algod from there.
* * `fail`: Throw an error.
**/
syncBehaviour:
| 'skip-sync-newest'
| 'sync-oldest'
| 'sync-oldest-start-now'
| 'catchup-with-indexer'
| 'fail';
}
````
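As a rough illustration of how `syncBehaviour` and `maxRoundsToSync` interact when the chain tip is further ahead than `maxRoundsToSync` past the watermark, here is a simplified sketch (not the library's actual algorithm, and it omits the indexer-assisted behaviours):

```typescript
// Simplified model: pick the round range a single poll would sync,
// given the persisted watermark and the current tip of the chain.
function roundRange(
  watermark: bigint,
  tip: bigint,
  maxRounds: bigint,
  behaviour: 'skip-sync-newest' | 'sync-oldest' | 'fail',
): [startRound: bigint, endRound: bigint] {
  // Within tolerance: just sync everything since the watermark
  if (tip - watermark <= maxRounds) return [watermark + 1n, tip];
  switch (behaviour) {
    case 'skip-sync-newest':
      // Drop old rounds and sync only the newest ones
      return [tip - maxRounds + 1n, tip];
    case 'sync-oldest':
      // Catch up from the watermark, maxRounds at a time
      return [watermark + 1n, watermark + maxRounds];
    case 'fail':
      throw new Error('Sync has fallen too far behind');
  }
}

console.log(roundRange(100n, 1000n, 500n, 'skip-sync-newest')); // rounds 501-1000
console.log(roundRange(100n, 1000n, 500n, 'sync-oldest')); // rounds 101-600
```

Note how `skip-sync-newest` trades history for staleness control, while `sync-oldest` preserves history at the cost of catch-up time.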
## TransactionFilter
[Section titled “TransactionFilter”](#transactionfilter)
The [`filters` parameter](#transactionsubscriptionparams) allows you to specify a set of filters to return a subset of transactions you are interested in. Each filter contains a `filter` property of type `TransactionFilter`. These filter parameters are shared between `AlgorandSubscriber` and `getSubscribedTransactions` via the `CoreTransactionSubscriptionParams` type:
````typescript
/** Common parameters to control a single subscription pull/poll for both `AlgorandSubscriber` and `getSubscribedTransactions`. */
export interface CoreTransactionSubscriptionParams {
/** The filter(s) to apply to find transactions of interest.
* A list of filters with corresponding names.
*
* @example
* ```typescript
* filter: [{
* name: 'asset-transfers',
* filter: {
* type: TransactionType.axfer,
* //...
* }
* }, {
* name: 'payments',
* filter: {
* type: TransactionType.pay,
* //...
* }
* }]
* ```
*
*/
filters: NamedTransactionFilter[];
/** Any ARC-28 event definitions to process from app call logs */
arc28Events?: Arc28EventGroup[];
/** The maximum number of rounds to sync from algod for each subscription pull/poll.
*
* Defaults to 500.
*
* This gives you control over how many rounds you wait for at a time,
* your staleness tolerance when using `skip-sync-newest` or `fail`, and
* your catchup speed when using `sync-oldest`.
**/
maxRoundsToSync?: number;
/**
 * The maximum number of rounds to sync from indexer when using `syncBehaviour: 'catchup-with-indexer'`.
*
* By default there is no limit and it will paginate through all of the rounds.
* Sometimes this can result in an incredibly long catchup time that may break the service
* due to execution and memory constraints, particularly for filters that result in a large number of transactions.
*
* Instead, this allows indexer catchup to be split into multiple polls, each with a transactionally consistent
* boundary based on the number of rounds specified here.
*/
maxIndexerRoundsToSync?: number;
/** If the current tip of the configured Algorand blockchain is more than `maxRoundsToSync`
* past `watermark` then how should that be handled:
* * `skip-sync-newest`: Discard old blocks/transactions and sync the newest; useful
* for real-time notification scenarios where you don't care about history and
* are happy to lose old transactions.
* * `sync-oldest`: Sync from the oldest rounds forward `maxRoundsToSync` rounds
* using algod; note: this will be slow if you are starting from 0 and requires
* an archival node.
* * `sync-oldest-start-now`: Same as `sync-oldest`, but if the `watermark` is `0`
* then start at the current round i.e. don't sync historical records, but once
* subscribing starts sync everything; note: if it falls behind it requires an
* archival node.
* * `catchup-with-indexer`: Sync to round `currentRound - maxRoundsToSync + 1`
* using indexer (much faster than using algod for long time periods) and then
* use algod from there.
* * `fail`: Throw an error.
**/
syncBehaviour:
| 'skip-sync-newest'
| 'sync-oldest'
| 'sync-oldest-start-now'
| 'catchup-with-indexer'
| 'fail';
}
````
Multiple fields specified within a single filter are combined with AND logic, e.g.
```typescript
filter: {
type: TransactionType.axfer,
sender: "ABC..."
}
```
This will return transactions that have the `axfer` type AND a sender of `"ABC..."`.
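To make the AND semantics concrete, here is a self-contained sketch using hypothetical local types (this mimics the matching behaviour, it is not the library's matcher):

```typescript
// Local, illustrative types: only two of the possible filter fields
interface Txn { type: string; sender: string }
interface Filter { type?: string; sender?: string }

// Every specified field must match (AND); omitted fields match anything
function matches(txn: Txn, f: Filter): boolean {
  return (
    (f.type === undefined || txn.type === f.type) &&
    (f.sender === undefined || txn.sender === f.sender)
  );
}

const filter: Filter = { type: 'axfer', sender: 'ABC...' };
console.log(matches({ type: 'axfer', sender: 'ABC...' }, filter)); // true
console.log(matches({ type: 'axfer', sender: 'XYZ...' }, filter)); // false (sender differs)
```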
### NamedTransactionFilter
[Section titled “NamedTransactionFilter”](#namedtransactionfilter)
You can specify multiple filters in an array, where each filter is a `NamedTransactionFilter`, which consists of:
```typescript
/** Specify a named filter to apply to find transactions of interest. */
export interface NamedTransactionFilter {
/** The name to give the filter. */
name: string;
/** The filter itself. */
filter: TransactionFilter;
}
```
This gives you the ability to detect which filter(s) matched a returned transaction. Note that you can use the same name multiple times if multiple filters (i.e. OR logic) comprise the same logical filter.
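The OR semantics that repeated names enable can be sketched like so, using hypothetical local types rather than the library's internals:

```typescript
// Two filters share one logical name: "account was sender OR receiver"
interface SimpleTxn { sender: string; receiver: string }
interface SimpleNamedFilter { name: string; match: (t: SimpleTxn) => boolean }

const namedFilters: SimpleNamedFilter[] = [
  { name: 'dhm-activity', match: (t) => t.sender === 'ER7A...' },
  { name: 'dhm-activity', match: (t) => t.receiver === 'ER7A...' },
];

// De-duplicate so the logical filter name appears at most once
function matchedNames(t: SimpleTxn): string[] {
  return [...new Set(namedFilters.filter((f) => f.match(t)).map((f) => f.name))];
}

console.log(matchedNames({ sender: 'ER7A...', receiver: 'XYZ...' })); // ['dhm-activity']
console.log(matchedNames({ sender: 'AAA...', receiver: 'BBB...' })); // []
```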
## Arc28EventGroup
[Section titled “Arc28EventGroup”](#arc28eventgroup)
The [`arc28Events` parameter](#transactionsubscriptionparams) allows you to define any ARC-28 events that may appear in subscribed transactions so they can either be subscribed to, or be processed and added to the resulting [subscribed transaction object](#subscribedtransaction).
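As an illustration, an event group for a hypothetical `Swapped(uint64,uint64)` event might look like the following sketch. The field names are assumed from the `Arc28EventGroup` type; verify them against the library's type definitions before use:

```typescript
// Hypothetical ARC-28 event group definition (illustrative names)
const swapEvents = {
  groupName: 'dex',
  events: [
    {
      name: 'Swapped',
      args: [
        { type: 'uint64', name: 'amountIn' },
        { type: 'uint64', name: 'amountOut' },
      ],
    },
  ],
};

// Per ARC-28, the event signature is `Name(type1,type2,...)`, and logs are
// matched on the first 4 bytes of the SHA-512/256 hash of this signature.
const signature = `${swapEvents.events[0].name}(${swapEvents.events[0].args
  .map((a) => a.type)
  .join(',')})`;
console.log(signature); // Swapped(uint64,uint64)
```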
## TransactionSubscriptionResult
[Section titled “TransactionSubscriptionResult”](#transactionsubscriptionresult)
The result of calling `getSubscribedTransactions` is a `TransactionSubscriptionResult`:
```typescript
/** The result of a single subscription pull/poll. */
export interface TransactionSubscriptionResult {
/** The round range that was synced from/to */
syncedRoundRange: [startRound: bigint, endRound: bigint];
/** The current detected tip of the configured Algorand blockchain. */
currentRound: bigint;
/** The watermark value that was retrieved at the start of the subscription poll. */
startingWatermark: bigint;
/** The new watermark value to persist for the next call to
* `getSubscribedTransactions` to continue the sync.
* Will be equal to `syncedRoundRange[1]`. Only persist this
* after processing (or in the same atomic transaction as)
* subscribed transactions to keep it reliable. */
newWatermark: bigint;
/** Any transactions that matched the given filter within
* the synced round range. This substantively uses the [indexer transaction
 * format](https://dev.algorand.co/reference/rest-apis/indexer#transaction)
* to represent the data with some additional fields.
*/
subscribedTransactions: SubscribedTransaction[];
/** The metadata about any blocks that were retrieved from algod as part
* of the subscription poll.
*/
blockMetadata?: BlockMetadata[];
}
/** Metadata about a block that was retrieved from algod. */
export interface BlockMetadata {
/** The base64 block hash. */
hash?: string;
/** The round of the block. */
round: bigint;
/** Block creation timestamp in seconds since epoch */
timestamp: number;
/** The genesis ID of the chain. */
genesisId: string;
/** The base64 genesis hash of the chain. */
genesisHash: string;
/** The base64 previous block hash. */
previousBlockHash?: string;
/** The base64 seed of the block. */
seed: string;
/** Fields relating to rewards */
rewards?: BlockRewards;
/** Count of parent transactions in this block */
parentTransactionCount: number;
/** Full count of transactions and inner transactions (recursively) in this block. */
fullTransactionCount: number;
/** Number of the next transaction that will be committed after this block. It is 0 when no transactions have ever been committed (since TxnCounter started being supported). */
txnCounter: bigint;
/** TransactionsRoot authenticates the set of transactions appearing in the block. More specifically, it's the root of a merkle tree whose leaves are the block's Txids, in lexicographic order. For the empty block, it's 0. Note that the TxnRoot does not authenticate the signatures on the transactions, only the transactions themselves. Two blocks with the same transactions but in a different order and with different signatures will have the same TxnRoot.
Pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" */
transactionsRoot: string;
/** TransactionsRootSHA256 is an auxiliary TransactionRoot, built using a vector commitment instead of a merkle tree, and SHA256 hash function instead of the default SHA512_256. This commitment can be used on environments where only the SHA256 function exists. */
transactionsRootSha256: string;
/** Fields relating to a protocol upgrade. */
upgradeState?: BlockUpgradeState;
/** Tracks the status of state proofs. */
stateProofTracking?: BlockStateProofTracking[];
/** Fields relating to voting for a protocol upgrade. */
upgradeVote?: BlockUpgradeVote;
/** Participation account data that needs to be checked/acted on by the network. */
participationUpdates?: ParticipationUpdates;
/** Address of the proposer of this block */
proposer?: string;
}
```
## SubscribedTransaction
[Section titled “SubscribedTransaction”](#subscribedtransaction)
The common model used to expose a transaction that is returned from a subscription is a `SubscribedTransaction`, which can be imported like so:
```typescript
import type { SubscribedTransaction } from '@algorandfoundation/algokit-subscriber/types';
```
This type is substantively based on `algosdk.indexerModels.Transaction`. While the indexer type is used, the subscriber itself doesn’t have to use indexer - any transactions it retrieves from algod are transformed to this common model type. Beyond the indexer type it has some modifications to:
* Make `id` required
* Add the `parentTransactionId` field so inner transactions have a reference to their parent
* Override the type of `innerTxns` to be `SubscribedTransaction[]` so inner transactions (recursively) get these extra fields too
* Add emitted ARC-28 events via `arc28Events`
* Add the list of filter(s) that caused the transaction to be matched via `filtersMatched`
* Add the list of balance change(s) that occurred in the transaction via `balanceChanges`
The definition of the type is:
```typescript
export class SubscribedTransaction extends algosdk.indexerModels.Transaction {
id: string;
/** The intra-round offset of the parent of this transaction (if it's an inner transaction). */
parentIntraRoundOffset?: number;
/** The transaction ID of the parent of this transaction (if it's an inner transaction). */
parentTransactionId?: string;
/** Inner transactions produced by application execution. */
innerTxns?: SubscribedTransaction[];
/** Any ARC-28 events emitted from an app call. */
arc28Events?: EmittedArc28Event[];
/** The names of any filters that matched the given transaction to result in it being 'subscribed'. */
filtersMatched?: string[];
/** The balance changes in the transaction. */
balanceChanges?: BalanceChange[];
constructor({
id,
parentIntraRoundOffset,
parentTransactionId,
innerTxns,
arc28Events,
filtersMatched,
balanceChanges,
...rest
}: Omit<SubscribedTransaction, 'getEncodingSchema' | 'toEncodingData'>) {
super(rest);
this.id = id;
this.parentIntraRoundOffset = parentIntraRoundOffset;
this.parentTransactionId = parentTransactionId;
this.innerTxns = innerTxns;
this.arc28Events = arc28Events;
this.filtersMatched = filtersMatched;
this.balanceChanges = balanceChanges;
}
}
/** An emitted ARC-28 event extracted from an app call log. */
export interface EmittedArc28Event extends Arc28EventToProcess {
/** The ordered arguments extracted from the event that was emitted */
args: ABIValue[];
/** The named arguments extracted from the event that was emitted (where the arguments had a name defined) */
argsByName: Record<string, ABIValue>;
}
/** An ARC-28 event to be processed */
export interface Arc28EventToProcess {
/** The name of the ARC-28 event group the event belongs to */
groupName: string;
/** The name of the ARC-28 event that was triggered */
eventName: string;
/** The signature of the event e.g. `EventName(type1,type2)` */
eventSignature: string;
/** The 4-byte hex prefix for the event */
eventPrefix: string;
/** The ARC-28 definition of the event */
eventDefinition: Arc28Event;
}
/** Represents a balance change effect for a transaction. */
export interface BalanceChange {
/** The address that the balance change is for. */
address: string;
/** The asset ID of the balance change, or 0 for Algos. */
assetId: bigint;
/** The amount of the balance change in smallest divisible unit or microAlgos. */
amount: bigint;
/** The roles the account was playing that led to the balance change */
roles: BalanceChangeRole[];
}
/** The role that an account was playing for a given balance change. */
export enum BalanceChangeRole {
/** Account was sending a transaction (sending asset and/or spending fee if asset `0`) */
Sender,
/** Account was receiving a transaction */
Receiver,
/** Account was having an asset amount closed to it */
CloseTo,
}
```
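As an illustration of working with `balanceChanges`, the entries for a transaction can be aggregated per address. This sketch uses local types mirroring the `BalanceChange` shape above (it is not part of the library):

```typescript
// Local mirror of the relevant BalanceChange fields
interface Change { address: string; assetId: bigint; amount: bigint }

// Sum the net effect per address (for a single asset's changes)
function netByAddress(changes: Change[]): Map<string, bigint> {
  const totals = new Map<string, bigint>();
  for (const c of changes) {
    totals.set(c.address, (totals.get(c.address) ?? 0n) + c.amount);
  }
  return totals;
}

// Example: a 1 ALGO payment with a 0.001 ALGO fee (assetId 0 = Algos,
// amounts in microAlgos); the sender's change includes the fee
const totals = netByAddress([
  { address: 'SENDER...', assetId: 0n, amount: -1_001_000n },
  { address: 'RECEIVER...', assetId: 0n, amount: 1_000_000n },
]);
console.log(totals.get('SENDER...')); // -1001000n
console.log(totals.get('RECEIVER...')); // 1000000n
```

In real usage you would also group by `assetId`, since amounts are in each asset's smallest divisible unit.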
## Examples
[Section titled “Examples”](#examples)
Here are some examples of how to use this method:
### Real-time notification of transactions of interest at the tip of the chain discarding stale records
[Section titled “Real-time notification of transactions of interest at the tip of the chain discarding stale records”](#real-time-notification-of-transactions-of-interest-at-the-tip-of-the-chain-discarding-stale-records)
If you ran the following code on a cron schedule of (say) every 5 seconds it would notify you every time the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`) sent a transaction. If the service stopped working for a period of time and fell behind then it would drop old records and restart notifications from the new tip.
```typescript
const algorand = AlgorandClient.defaultLocalNet();
// You would need to implement getLastWatermark() to retrieve from a persistence store
const watermark = await getLastWatermark();
const subscription = await getSubscribedTransactions(
{
filters: [
{
name: 'filter1',
filter: {
sender: 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU',
},
},
],
watermark,
maxRoundsToSync: 100,
syncBehaviour: 'skip-sync-newest',
},
algorand.client.algod,
);
if (subscription.subscribedTransactions.length > 0) {
// You would need to implement notifyTransactions to action the transactions
await notifyTransactions(subscription.subscribedTransactions);
}
// You would need to implement saveWatermark to persist the watermark to the persistence store
await saveWatermark(subscription.newWatermark);
```
### Real-time notification of transactions of interest at the tip of the chain with at least once delivery
[Section titled “Real-time notification of transactions of interest at the tip of the chain with at least once delivery”](#real-time-notification-of-transactions-of-interest-at-the-tip-of-the-chain-with-at-least-once-delivery)
If you ran the following code on a cron schedule of (say) every 5 seconds it would notify you every time the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`) sent a transaction. If the service stopped working for a period of time and fell behind then it would pick up where it left off and catch up using algod (note: you need to connect it to an archival node).
```typescript
const algorand = AlgorandClient.defaultLocalNet();
// You would need to implement getLastWatermark() to retrieve from a persistence store
const watermark = await getLastWatermark();
const subscription = await getSubscribedTransactions(
{
filters: [
{
name: 'filter1',
filter: {
sender: 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU',
},
},
],
watermark,
maxRoundsToSync: 100,
syncBehaviour: 'sync-oldest-start-now',
},
algorand.client.algod,
);
if (subscription.subscribedTransactions.length > 0) {
// You would need to implement notifyTransactions to action the transactions
await notifyTransactions(subscription.subscribedTransactions);
}
// You would need to implement saveWatermark to persist the watermark to the persistence store
await saveWatermark(subscription.newWatermark);
```
### Quickly building a reliable, up-to-date cache index of all transactions of interest from the beginning of the chain
[Section titled “Quickly building a reliable, up-to-date cache index of all transactions of interest from the beginning of the chain”](#quickly-building-a-reliable-up-to-date-cache-index-of-all-transactions-of-interest-from-the-beginning-of-the-chain)
If you ran the following code on a cron schedule of (say) every 30 - 60 seconds it would create a cached index of all assets created by the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`). Given it uses indexer to catch up you can deploy this into a fresh environment with an empty database and it will catch up in seconds rather than days.
```typescript
const algorand = AlgorandClient.defaultLocalNet();
// You would need to implement getLastWatermark() to retrieve from a persistence store
const watermark = await getLastWatermark();
const subscription = await getSubscribedTransactions(
{
filters: [
{
name: 'filter1',
filter: {
type: TransactionType.acfg,
sender: 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU',
assetCreate: true,
},
},
],
watermark,
maxRoundsToSync: 1000,
syncBehaviour: 'catchup-with-indexer',
},
algorand.client.algod,
algorand.client.indexer,
);
if (subscription.subscribedTransactions.length > 0) {
// You would need to implement saveTransactions to persist the transactions
await saveTransactions(subscription.subscribedTransactions);
}
// You would need to implement saveWatermark to persist the watermark to the persistence store
await saveWatermark(subscription.newWatermark);
```
# Testing Guide
The Algorand Python Testing framework provides powerful tools for testing Algorand Python smart contracts within a Python interpreter. This guide covers the main features and concepts of the framework, helping you write effective tests for your Algorand applications.
```{note}
For all code examples in the _Testing Guide_ section, assume `context` is an instance of `AlgopyTestContext` obtained using the `algopy_testing_context()` context manager. All subsequent code is executed within this context.
```
```{mermaid}
graph TD
subgraph GA["Your Development Environment"]
A["algopy (type stubs)"]
B["algopy_testing (testing framework)
(You are here 📍)"]
C["puya (compiler)"]
end
subgraph GB["Your Algorand Project"]
D[Your Algorand Python contract]
end
D -->|type hints inferred from| A
D -->|compiled using| C
D -->|tested via| B
```
> *High-level overview of the relationship between your smart contracts project, Algorand Python Testing framework, Algorand Python type stubs, and the compiler*
The Algorand Python Testing framework streamlines unit testing of your Algorand Python smart contracts by offering functionality to:
1. Simulate the Algorand Virtual Machine (AVM) environment
2. Create and manipulate test accounts, assets, applications, transactions, and ARC4 types
3. Test smart contract classes, including their states, variables, and methods
4. Verify logic signatures and subroutines
5. Manage global state, local state, scratch slots, and boxes in test contexts
6. Simulate transactions and transaction groups, including inner transactions
7. Verify opcode behavior
By using this framework, you can ensure your Algorand Python smart contracts function correctly before deploying them to a live network.
Key features of the framework include:
* `AlgopyTestContext`: The main entry point for testing, providing access to various testing utilities and simulated blockchain state
* AVM Type Simulation: Accurate representations of AVM types like `UInt64` and `Bytes`
* ARC4 Support: Tools for testing ARC4 contracts and methods, including struct definitions and ABI encoding/decoding
* Transaction Simulation: Ability to create and execute various transaction types
* State Management: Tools for managing and verifying global and local state changes
* Opcode Simulation: Implementations of AVM opcodes for accurate smart contract behavior testing
The framework is designed to work seamlessly with Algorand Python smart contracts, allowing developers to write comprehensive unit tests that closely mimic the behavior of contracts on the Algorand blockchain.
## Table of Contents
[Section titled “Table of Contents”](#table-of-contents)
```{toctree}
---
maxdepth: 3
---
concepts
avm-types
arc4-types
transactions
contract-testing
signature-testing
state-management
subroutines
opcodes
```
# ARC4 Types
These types are available under the `algopy.arc4` namespace. Refer to the [ARC4 specification](https://arc.algorand.foundation/ARCs/arc-0004) for more details on the spec.
```{hint}
The test context manager provides _value generators_ for ARC4 types. To access them, use the `{context_instance}.any.arc4` property. See more examples below.
```
```{note}
For all `algopy.arc4` types, with or without a respective _value generator_, instantiation can be performed directly. If you have a suggestion for a new _value generator_ implementation, please open an issue in the [`algorand-python-testing`](https://github.com/algorandfoundation/algorand-python-testing) repository or contribute by following the [contribution guide](https://github.com/algorandfoundation/algorand-python-testing/blob/main/CONTRIBUTING).
```
```{testsetup}
import algopy
from algopy_testing import algopy_testing_context
# Create the context manager for snippets below
ctx_manager = algopy_testing_context()
# Enter the context
context = ctx_manager.__enter__()
```
## Unsigned Integers
[Section titled “Unsigned Integers”](#unsigned-integers)
```{testcode}
from algopy import arc4
# Integer types
uint8_value = arc4.UInt8(255)
uint16_value = arc4.UInt16(65535)
uint32_value = arc4.UInt32(4294967295)
uint64_value = arc4.UInt64(18446744073709551615)
... # instantiate test context
# Generate a random unsigned arc4 integer with default range
uint8 = context.any.arc4.uint8()
uint16 = context.any.arc4.uint16()
uint32 = context.any.arc4.uint32()
uint64 = context.any.arc4.uint64()
biguint128 = context.any.arc4.biguint128()
biguint256 = context.any.arc4.biguint256()
biguint512 = context.any.arc4.biguint512()
# Generate a random unsigned arc4 integer with specified range
uint8_custom = context.any.arc4.uint8(min_value=10, max_value=100)
uint16_custom = context.any.arc4.uint16(min_value=1000, max_value=5000)
uint32_custom = context.any.arc4.uint32(min_value=100000, max_value=1000000)
uint64_custom = context.any.arc4.uint64(min_value=1000000000, max_value=10000000000)
biguint128_custom = context.any.arc4.biguint128(min_value=1000000000000000, max_value=10000000000000000)
biguint256_custom = context.any.arc4.biguint256(min_value=1000000000000000000000000, max_value=10000000000000000000000000)
biguint512_custom = context.any.arc4.biguint512(min_value=10000000000000000000000000000000000, max_value=10000000000000000000000000000000000)
```
## Address
[Section titled “Address”](#address)
```{testcode}
from algopy import arc4
# Address type
address_value = arc4.Address("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ")
# Generate a random address
random_address = context.any.arc4.address()
# Access the native underlying type
native = random_address.native
```
## Dynamic Bytes
[Section titled “Dynamic Bytes”](#dynamic-bytes)
```{testcode}
from algopy import arc4
# Dynamic byte string
bytes_value = arc4.DynamicBytes(b"Hello, Algorand!")
# Generate random dynamic bytes
random_dynamic_bytes = context.any.arc4.dynamic_bytes(n=123) # n is the number of bits in the arc4 dynamic bytes
```
## String
[Section titled “String”](#string)
```{testcode}
from algopy import arc4
# UTF-8 encoded string
string_value = arc4.String("Hello, Algorand!")
# Generate random string
random_string = context.any.arc4.string(n=12) # n is the number of bits in the arc4 string
```
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# AVM Types
These types are available directly under the `algopy` namespace. They represent the basic AVM primitive types and can be instantiated directly or via *value generators*:
```{note}
For primitive `algopy` types such as `Account`, `Application`, `Asset`, `UInt64`, `BigUInt`, `Bytes`, and `String`, instantiation can be performed directly, with or without the respective _value generator_. If you have a suggestion for a new _value generator_ implementation, please open an issue in the [`algorand-python-testing`](https://github.com/algorandfoundation/algorand-python-testing) repository or contribute by following the [contribution guide](https://github.com/algorandfoundation/algorand-python-testing/blob/main/CONTRIBUTING).
```
```{testsetup}
import algopy
from algopy_testing import algopy_testing_context
# Create the context manager for snippets below
ctx_manager = algopy_testing_context()
# Enter the context
context = ctx_manager.__enter__()
```
## UInt64
[Section titled “UInt64”](#uint64)
```{testcode}
# Direct instantiation
uint64_value = algopy.UInt64(100)
# Instantiate test context
...
# Generate a random UInt64 value
random_uint64 = context.any.uint64()
# Specify a range
random_uint64 = context.any.uint64(min_value=1000, max_value=9999)
```
## Bytes
[Section titled “Bytes”](#bytes)
```{testcode}
# Direct instantiation
bytes_value = algopy.Bytes(b"Hello, Algorand!")
# Instantiate test context
...
# Generate random byte sequences
random_bytes = context.any.bytes()
# Specify the length
random_bytes = context.any.bytes(length=32)
```
## String
[Section titled “String”](#string)
```{testcode}
# Direct instantiation
string_value = algopy.String("Hello, Algorand!")
# Generate random strings
random_string = context.any.string()
# Specify the length
random_string = context.any.string(length=16)
```
## BigUInt
[Section titled “BigUInt”](#biguint)
```{testcode}
# Direct instantiation
biguint_value = algopy.BigUInt(100)
# Generate a random BigUInt value
random_biguint = context.any.biguint()
```
## Asset
[Section titled “Asset”](#asset)
```{testcode}
# Direct instantiation
asset = algopy.Asset(asset_id=1001)
# Instantiate test context
...
# Generate a random asset
random_asset = context.any.asset(
creator=..., # Optional: Creator account
name=..., # Optional: Asset name
unit_name=..., # Optional: Unit name
total=..., # Optional: Total supply
decimals=..., # Optional: Number of decimals
default_frozen=..., # Optional: Default frozen state
url=..., # Optional: Asset URL
metadata_hash=..., # Optional: Metadata hash
manager=..., # Optional: Manager address
reserve=..., # Optional: Reserve address
freeze=..., # Optional: Freeze address
clawback=... # Optional: Clawback address
)
# Get an asset by ID
asset = context.ledger.get_asset(asset_id=random_asset.id)
# Update an asset
context.ledger.update_asset(
random_asset,
name=..., # Optional: New asset name
total=..., # Optional: New total supply
decimals=..., # Optional: Number of decimals
default_frozen=..., # Optional: Default frozen state
url=..., # Optional: New asset URL
metadata_hash=..., # Optional: New metadata hash
manager=..., # Optional: New manager address
reserve=..., # Optional: New reserve address
freeze=..., # Optional: New freeze address
clawback=... # Optional: New clawback address
)
```
## Account
[Section titled “Account”](#account)
```{testcode}
# Direct instantiation
raw_address = 'PUYAGEGVCOEBP57LUKPNOCSMRWHZJSU4S62RGC2AONDUEIHC6P7FOPJQ4I'
account = algopy.Account(raw_address) # defaults to the zero address when no address is provided
# Instantiate test context
...
# Generate a random account
random_account = context.any.account(
address=str(raw_address), # Optional: Specify a custom address, defaults to a random address
opted_asset_balances={}, # Optional: Specify opted asset balances as dict of assets to balance
opted_apps=[], # Optional: Specify opted apps as sequence of algopy.Application objects
balance=..., # Optional: Specify an initial balance
min_balance=..., # Optional: Specify a minimum balance
auth_address=..., # Optional: Specify an auth address
total_assets=..., # Optional: Specify the total number of assets
total_assets_created=..., # Optional: Specify the total number of created assets
total_apps_created=..., # Optional: Specify the total number of created applications
total_apps_opted_in=..., # Optional: Specify the total number of applications opted into
total_extra_app_pages=..., # Optional: Specify the total number of extra application pages
)
# Generate a random account that is opted into a specific asset
mock_asset = context.any.asset()
mock_account = context.any.account(
opted_asset_balances={mock_asset: 123}
)
# Get an account by address
account = context.ledger.get_account(str(mock_account))
# Update an account
context.ledger.update_account(
mock_account,
balance=..., # Optional: New balance
min_balance=..., # Optional: New minimum balance
auth_address=context.any.account(), # Optional: New auth address
total_assets=..., # Optional: New total number of assets
total_created_assets=..., # Optional: New total number of created assets
total_apps_created=..., # Optional: New total number of created applications
total_apps_opted_in=..., # Optional: New total number of applications opted into
total_extra_app_pages=..., # Optional: New total number of extra application pages
rewards=..., # Optional: New rewards
status=... # Optional: New account status
)
# Check if an account is opted into a specific asset
opted_in = account.is_opted_in(mock_asset)
```
## Application
[Section titled “Application”](#application)
```{testcode}
# Direct instantiation
application = algopy.Application()
# Instantiate test context
...
# Generate a random application
random_app = context.any.application(
approval_program=algopy.Bytes(b''), # Optional: Specify a custom approval program
clear_state_program=algopy.Bytes(b''), # Optional: Specify a custom clear state program
global_num_uint=algopy.UInt64(1), # Optional: Number of global uint values
global_num_bytes=algopy.UInt64(1), # Optional: Number of global byte values
local_num_uint=algopy.UInt64(1), # Optional: Number of local uint values
local_num_bytes=algopy.UInt64(1), # Optional: Number of local byte values
extra_program_pages=algopy.UInt64(1), # Optional: Number of extra program pages
creator=context.default_sender # Optional: Specify the creator account
)
# Get an application by ID
app = context.ledger.get_app(app_id=random_app.id)
# Update an application
context.ledger.update_app(
random_app,
approval_program=..., # Optional: New approval program
clear_state_program=..., # Optional: New clear state program
global_num_uint=..., # Optional: New number of global uint values
global_num_bytes=..., # Optional: New number of global byte values
local_num_uint=..., # Optional: New number of local uint values
local_num_bytes=..., # Optional: New number of local byte values
extra_program_pages=..., # Optional: New number of extra program pages
creator=... # Optional: New creator account
)
# Patch logs for an application. When accessed via transaction or inner-transaction opcodes, the patched logs are returned unless new logs were added to the transaction during execution.
test_app = context.any.application(logs=[b"log entry 1", b"log entry 2"])  # a single bytes value is also accepted, e.g. logs=b"log entry"
# Get app associated with the active contract
class MyContract(algopy.ARC4Contract):
...
contract = MyContract()
active_app = context.ledger.get_app(contract)
```
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Concepts
The following sections provide an overview of key concepts and features in the Algorand Python Testing framework.
## Test Context
[Section titled “Test Context”](#test-context)
The main abstraction for interacting with the testing framework is the [`AlgopyTestContext`](../api-context#algopy_testing.AlgopyTestContext). It creates an emulated Algorand environment that closely mimics AVM behavior relevant to unit testing the contracts and provides a Pythonic interface for interacting with the emulated environment.
```python
from algopy_testing import algopy_testing_context
def test_my_contract():
# Recommended way to instantiate the test context
with algopy_testing_context() as ctx:
# Your test code here
pass
# ctx is automatically reset after the test code is executed
```
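The automatic reset follows the standard Python context-manager protocol: state is created on entry and torn down on exit. A minimal sketch of the pattern (the `_TestState` holder and `testing_context` function are hypothetical, not the real `AlgopyTestContext` internals):

```python
from contextlib import contextmanager

class _TestState:
    """Hypothetical holder for emulated ledger state."""
    def __init__(self) -> None:
        self.ledger: dict[str, int] = {}

@contextmanager
def testing_context():
    state = _TestState()
    try:
        yield state            # the test body runs against fresh state
    finally:
        state.ledger.clear()   # state is torn down when the block exits

with testing_context() as ctx:
    ctx.ledger["app_id"] = 1001
    assert ctx.ledger == {"app_id": 1001}

assert ctx.ledger == {}  # automatically reset after the block
```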
The context manager interface exposes three main properties:
1. `ledger`: An instance of `LedgerContext` for interacting with and querying the emulated Algorand ledger state.
2. `txn`: An instance of `TransactionContext` for creating and managing transaction groups, submitting transactions, and accessing transaction results.
3. `any`: An instance of `AlgopyValueGenerator` for generating randomized test data.
For detailed method signatures, parameters, and return types, refer to the following API sections:
* [`algopy_testing.LedgerContext`](../api)
* [`algopy_testing.TransactionContext`](../api)
* [`algopy_testing.AVMValueGenerator`, `algopy_testing.TxnValueGenerator`, `algopy_testing.ARC4ValueGenerator`](../api)
The `any` property provides access to different value generators:
* `AVMValueGenerator`: Base abstractions for AVM types. All methods are available directly on the instance returned from `any`.
* `TxnValueGenerator`: Accessible via `any.txn`, for transaction-related data.
* `ARC4ValueGenerator`: Accessible via `any.arc4`, for ARC4 type data.
These generators allow creation of constrained random values for various AVM entities (accounts, assets, applications, etc.) when specific values are not required.
```{hint}
Value generators are powerful tools for generating test data for specified AVM types. They allow further constraints on random value generation via arguments, making it easier to generate test data when exact values are not necessary.
When used with the 'Arrange, Act, Assert' pattern, value generators can be especially useful in setting up clear and concise test data in arrange steps.
They can also serve as a base building block that can be integrated/reused with popular Python property-based testing frameworks like [`hypothesis`](https://hypothesis.readthedocs.io/en/latest/).
```
## Types of `algopy` stub implementations
[Section titled “Types of algopy stub implementations”](#types-of-algopy-stub-implementations)
As explained in the [introduction](index), `algorand-python-testing` *injects* test implementations for stubs available in the `algorand-python` package. However, not all of the stubs are implemented in the same manner:
1. **Native**: Fully matches AVM computation in Python. For example, `algopy.op.sha256` and other cryptographic operations behave identically in AVM and unit tests. This implies that the majority of opcodes that are ‘pure’ functions in AVM also have a native Python implementation provided by this package. These abstractions and opcodes can be used within and outside of the testing context.
2. **Emulated**: Uses `AlgopyTestContext` to mimic AVM behavior. For example, `Box.put` on an `algopy.Box` within a test context stores data in the test manager, not the real Algorand network, but provides the same interface.
3. **Mockable**: Not implemented, but can be mocked or patched. For example, `algopy.abi_call` can be mocked to return specific values or behaviors; otherwise, it raises a `NotImplementedError`. This category covers cases where native or emulated implementation in a unit test context is impractical or overly complex.
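The *mockable* category can be exercised with standard `unittest.mock` tooling. A plain-Python sketch of the pattern (the `Stubs.abi_call` placeholder is hypothetical, standing in for a mockable stub such as `algopy.arc4.abi_call`):

```python
from unittest.mock import patch

class Stubs:
    # Stand-in for a mockable stub: unpatched, it raises NotImplementedError
    @staticmethod
    def abi_call(*args, **kwargs):
        raise NotImplementedError("mock 'abi_call' in your test")

# Unpatched invocation raises
try:
    Stubs.abi_call("my_method")
    unpatched_raised = False
except NotImplementedError:
    unpatched_raised = True

# Patched invocation returns whatever the test dictates
with patch.object(Stubs, "abi_call", return_value=42):
    patched_result = Stubs.abi_call("my_method")

assert unpatched_raised and patched_result == 42
```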
For a full list of all public `algopy` types and their corresponding implementation category, refer to the [Coverage](coverage) section.
```plaintext
```
# Smart Contract Testing
This guide provides an overview of how to test smart contracts using the Algorand Python SDK (`algopy`). We will cover the basics of testing `ARC4Contract` and `Contract` classes, focusing on `abimethod` and `baremethod` decorators.

```{note}
The code snippets showcasing the contract testing capabilities are using [pytest](https://docs.pytest.org/en/latest/) as the test framework. However, note that the `algorand-python-testing` package can be used with any other test framework that supports Python. `pytest` is used for demonstration purposes in this documentation.
```
```{testsetup}
import algopy
import algopy_testing
from algopy_testing import algopy_testing_context
# Create the context manager for snippets below
ctx_manager = algopy_testing_context()
# Enter the context
context = ctx_manager.__enter__()
```
The following video provides a practical tutorial on testing Algorand smart contracts:
[Algorand Smart Contract Testing - Python](https://www.youtube.com/embed/B4mzNmQB5mU?rel=0)
## `algopy.ARC4Contract`
[Section titled “algopy.ARC4Contract”](#algopyarc4contract)
Subclasses of `algopy.ARC4Contract` are **required** to be instantiated with an active test context. As part of instantiation, the test context will automatically create a matching `algopy.Application` object instance.
Within the class implementation, methods decorated with `algopy.arc4.abimethod` and `algopy.arc4.baremethod` will automatically assemble an `algopy.gtxn.ApplicationCallTransaction` transaction to emulate the AVM application call. This behavior can be overridden by setting the transaction group manually as part of test setup; otherwise it is handled via implicit invocation of the `algopy_testing.context.any_application()` *value generator* (refer to [APIs](../apis) for more details).
```{testcode}
class SimpleVotingContract(algopy.ARC4Contract):
def __init__(self) -> None:
self.topic = algopy.GlobalState(algopy.Bytes(b"default_topic"), key="topic", description="Voting topic")
self.votes = algopy.GlobalState(
algopy.UInt64(0),
key="votes",
description="Votes for the option",
)
self.voted = algopy.LocalState(algopy.UInt64, key="voted", description="Tracks if an account has voted")
@algopy.arc4.abimethod(create="require")
def create(self, initial_topic: algopy.Bytes) -> None:
self.topic.value = initial_topic
self.votes.value = algopy.UInt64(0)
@algopy.arc4.abimethod
def vote(self) -> algopy.UInt64:
assert self.voted[algopy.Txn.sender] == algopy.UInt64(0), "Account has already voted"
self.votes.value += algopy.UInt64(1)
self.voted[algopy.Txn.sender] = algopy.UInt64(1)
return self.votes.value
@algopy.arc4.abimethod(readonly=True)
def get_votes(self) -> algopy.UInt64:
return self.votes.value
@algopy.arc4.abimethod
def change_topic(self, new_topic: algopy.Bytes) -> None:
assert algopy.Txn.sender == algopy.Txn.application_id.creator, "Only creator can change topic"
self.topic.value = new_topic
self.votes.value = algopy.UInt64(0)
# Reset user's vote (this is simplified per single user for the sake of example)
self.voted[algopy.Txn.sender] = algopy.UInt64(0)
# Arrange
initial_topic = algopy.Bytes(b"initial_topic")
contract = SimpleVotingContract()
contract.voted[context.default_sender] = algopy.UInt64(0)
# Act - Create the contract
contract.create(initial_topic)
# Assert - Check initial state
assert contract.topic.value == initial_topic
assert contract.votes.value == algopy.UInt64(0)
# Act - Vote
# The method `.vote()` is decorated with `algopy.arc4.abimethod`, which means it will assemble a transaction to emulate the AVM application call
result = contract.vote()
# Assert - you can access the corresponding auto generated application call transaction via test context
assert len(context.txn.last_group.txns) == 1
# Assert - Note how local and global state are accessed via regular python instance attributes
assert result == algopy.UInt64(1)
assert contract.votes.value == algopy.UInt64(1)
assert contract.voted[context.default_sender] == algopy.UInt64(1)
# Act - Change topic
new_topic = algopy.Bytes(b"new_topic")
contract.change_topic(new_topic)
# Assert - Check topic changed and votes reset
assert contract.topic.value == new_topic
assert contract.votes.value == algopy.UInt64(0)
assert contract.voted[context.default_sender] == algopy.UInt64(0)
# Act - Get votes (should be 0 after reset)
votes = contract.get_votes()
# Assert - Check votes
assert votes == algopy.UInt64(0)
```
For more examples of tests using `algopy.ARC4Contract`, see the [examples](../examples) section.
## `algopy.Contract`
[Section titled “algopy.Contract”](#algopycontract)
Subclasses of `algopy.Contract` are **required** to be instantiated with an active test context. As part of instantiation, the test context will automatically create a matching `algopy.Application` object instance, just as it does for `algopy.ARC4Contract` instances.
Unlike `algopy.ARC4Contract`, `algopy.Contract` requires manual setup of the transaction context and explicit method calls. Alternatively, you can use `active_txn_overrides` to specify application arguments and foreign arrays without creating a full transaction group, when you only need to patch metadata on the active transaction.
Here’s an updated example demonstrating how to test a `Contract` class:
```{testcode}
import algopy
import pytest
from algopy_testing import AlgopyTestContext, algopy_testing_context
class CounterContract(algopy.Contract):
def __init__(self):
self.counter = algopy.UInt64(0)
@algopy.subroutine
def increment(self):
self.counter += algopy.UInt64(1)
return algopy.UInt64(1)
@algopy.arc4.baremethod
def approval_program(self):
return self.increment()
@algopy.arc4.baremethod
def clear_state_program(self):
return algopy.UInt64(1)
@pytest.fixture()
def context():
with algopy_testing_context() as ctx:
yield ctx
def test_counter_contract(context: AlgopyTestContext):
# Instantiate contract
contract = CounterContract()
# Set up the transaction context using active_txn_overrides
with context.txn.create_group(
active_txn_overrides={
"sender": context.default_sender,
"app_args": [algopy.Bytes(b"increment")],
}
):
# Invoke approval program
result = contract.approval_program()
# Assert approval program result
assert result == algopy.UInt64(1)
# Assert counter value
assert contract.counter == algopy.UInt64(1)
# Test clear state program
assert contract.clear_state_program() == algopy.UInt64(1)
def test_counter_contract_multiple_txns(context: AlgopyTestContext):
contract = CounterContract()
# For scenarios with multiple transactions, you can still use gtxns
extra_payment = context.any.txn.payment()
with context.txn.create_group(
gtxns=[
extra_payment,
context.any.txn.application_call(
sender=context.default_sender,
app_id=contract.app_id,
app_args=[algopy.Bytes(b"increment")],
),
],
active_txn_index=1 # Set the application call as the active transaction
):
result = contract.approval_program()
assert result == algopy.UInt64(1)
assert contract.counter == algopy.UInt64(1)
assert len(context.txn.last_group.txns) == 2
```
In this updated example:
1. We use `context.txn.create_group()` with `active_txn_overrides` to set up the transaction context for a single application call. This simplifies the process when you don’t need to specify a full transaction group.
2. The `active_txn_overrides` parameter allows you to specify `app_args` and other transaction fields directly, without creating a full `ApplicationCallTransaction` object.
3. For scenarios involving multiple transactions, you can still use the `gtxns` parameter to create a transaction group, as shown in the `test_counter_contract_multiple_txns` function.
4. The `app_id` is automatically set to the contract’s application ID, so you don’t need to specify it explicitly when using `active_txn_overrides`.
This approach provides more flexibility in setting up the transaction context for testing `Contract` classes, allowing for both simple single-transaction scenarios and more complex multi-transaction tests.
## Defer contract method invocation
[Section titled “Defer contract method invocation”](#defer-contract-method-invocation)
You can create deferred application calls for more complex testing scenarios where the order of transactions needs to be controlled:
```python
def test_deferred_call(context):
contract = MyARC4Contract()
extra_payment = context.any.txn.payment()
extra_asset_transfer = context.any.txn.asset_transfer()
implicit_payment = context.any.txn.payment()
deferred_call = context.txn.defer_app_call(contract.some_method, implicit_payment)
with context.txn.create_group([extra_payment, deferred_call, extra_asset_transfer]):
result = deferred_call.submit()
print(context.txn.last_group) # [extra_payment, implicit_payment, app call, extra_asset_transfer]
```
A deferred application call prepares the application call transaction without immediately executing it. The call can be executed later by invoking the `.submit()` method on the deferred application call instance. As demonstrated in the example, you can also include the deferred call in a transaction group creation context manager to execute it as part of a larger transaction group. When `.submit()` is called, only the specific method passed to `defer_app_call()` will be executed.
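Conceptually, a deferred call just captures a callable and its arguments now and executes them on `.submit()`. A plain-Python sketch of this pattern (the `DeferredCall` class here is hypothetical, not the real `defer_app_call` implementation):

```python
class DeferredCall:
    """Capture a method invocation now; execute it only on submit()."""
    def __init__(self, method, *args, **kwargs):
        self._method = method
        self._args = args
        self._kwargs = kwargs
        self.result = None

    def submit(self):
        self.result = self._method(*self._args, **self._kwargs)
        return self.result

calls = []

def some_method(x):
    calls.append(x)
    return x + 1

deferred = DeferredCall(some_method, 41)
assert calls == []            # nothing executed at preparation time
assert deferred.submit() == 42
assert calls == [41]          # executed only when submit() is invoked
```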
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# AVM Opcodes
The [coverage](coverage) file provides a comprehensive list of all opcodes and their respective types, categorized as *Mockable*, *Emulated*, or *Native* within the `algorand-python-testing` package. This section highlights a **subset** of opcodes and types that typically require interaction with the test context manager.
`Native` opcodes are assumed to function as they do in the Algorand Virtual Machine, given their stateless nature. If you encounter issues with any `Native` opcodes, please raise an issue in the [`algorand-python-testing` repo](https://github.com/algorandfoundation/algorand-python-testing/issues/new/choose) or contribute a PR following the [Contributing](https://github.com/algorandfoundation/algorand-python-testing/blob/main/CONTRIBUTING) guide.
```{testsetup}
import algopy
from algopy_testing import algopy_testing_context
# Create the context manager for snippets below
ctx_manager = algopy_testing_context()
# Enter the context
context = ctx_manager.__enter__()
```
## Implemented Types
[Section titled “Implemented Types”](#implemented-types)
These types are fully implemented in Python and behave identically to their AVM counterparts:
### 1. Cryptographic Operations
[Section titled “1. Cryptographic Operations”](#1-cryptographic-operations)
The following opcodes are demonstrated:
* `op.sha256`
* `op.keccak256`
* `op.ecdsa_verify`
```{testcode}
from algopy import op
# SHA256 hash
data = algopy.Bytes(b"Hello, World!")
hashed = op.sha256(data)
# Keccak256 hash
keccak_hashed = op.keccak256(data)
# ECDSA verification
message_hash = bytes.fromhex("f809fd0aa0bb0f20b354c6b2f86ea751957a4e262a546bd716f34f69b9516ae1")
sig_r = bytes.fromhex("18d96c7cda4bc14d06277534681ded8a94828eb731d8b842e0da8105408c83cf")
sig_s = bytes.fromhex("7d33c61acf39cbb7a1d51c7126f1718116179adebd31618c4604a1f03b5c274a")
pubkey_x = bytes.fromhex("f8140e3b2b92f7cbdc8196bc6baa9ce86cf15c18e8ad0145d50824e6fa890264")
pubkey_y = bytes.fromhex("bd437b75d6f1db67155a95a0da4b41f2b6b3dc5d42f7db56238449e404a6c0a3")
result = op.ecdsa_verify(op.ECDSA.Secp256r1, message_hash, sig_r, sig_s, pubkey_x, pubkey_y)
assert result
```
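Because these cryptographic opcodes are *native*, their digests match the standard library byte-for-byte. For instance, `hashlib` reproduces the SHA-256 result for the same input used above:

```python
import hashlib

# SHA-256 of the same input passed to op.sha256 above
digest = hashlib.sha256(b"Hello, World!").hexdigest()
assert digest == "dffd6021bb2bd5b0af676290809ec3a53191dd81c7f70a4b28688a362182986f"
```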
### 2. Arithmetic and Bitwise Operations
[Section titled “2. Arithmetic and Bitwise Operations”](#2-arithmetic-and-bitwise-operations)
The following opcodes are demonstrated:
* `op.addw`
* `op.bitlen`
* `op.getbit`
* `op.setbit_uint64`
```{testcode}
from algopy import op
# Addition with carry
result, carry = op.addw(algopy.UInt64(2**63), algopy.UInt64(2**63))
# Bitwise operations
value = algopy.UInt64(42)
bit_length = op.bitlen(value)
is_bit_set = op.getbit(value, 3)
new_value = op.setbit_uint64(value, 2, 1)
```
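The semantics of these opcodes can be reproduced in plain Python, which is effectively what the native implementations do. A sketch with hypothetical helper names (for uint64 values, bit index 0 is the least significant bit):

```python
MASK64 = 2**64 - 1

def addw(a: int, b: int) -> tuple[int, int]:
    # 128-bit sum returned as (high, low) 64-bit words
    total = a + b
    return total >> 64, total & MASK64

def bitlen(value: int) -> int:
    return value.bit_length()

def getbit(value: int, index: int) -> int:
    return (value >> index) & 1

def setbit_uint64(value: int, index: int, bit: int) -> int:
    cleared = value & ~(1 << index) & MASK64
    return cleared | (bit << index)

assert addw(2**63, 2**63) == (1, 0)   # overflow carries into the high word
assert bitlen(42) == 6                # 42 = 0b101010
assert getbit(42, 3) == 1
assert setbit_uint64(42, 2, 1) == 46
```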
For a comprehensive list of all opcodes and types, refer to the [coverage](../coverage) page.
## Emulated Types Requiring Transaction Context
[Section titled “Emulated Types Requiring Transaction Context”](#emulated-types-requiring-transaction-context)
These types necessitate interaction with the transaction context:
### algopy.op.Global
[Section titled “algopy.op.Global”](#algopyopglobal)
```{testcode}
from algopy import op
class MyContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def check_globals(self) -> algopy.UInt64:
return op.Global.min_txn_fee + op.Global.min_balance
... # setup context (the snippet below assumes it is available as the 'context' variable)
context.ledger.patch_global_fields(
min_txn_fee=algopy.UInt64(1000),
min_balance=algopy.UInt64(100000)
)
contract = MyContract()
result = contract.check_globals()
assert result == algopy.UInt64(101000)
```
### algopy.op.Txn
[Section titled “algopy.op.Txn”](#algopyoptxn)
```{testcode}
from algopy import op
class MyContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def check_txn_fields(self) -> algopy.arc4.Address:
return algopy.arc4.Address(op.Txn.sender)
... # setup context (the snippet below assumes it is available as the 'context' variable)
contract = MyContract()
custom_sender = context.any.account()
with context.txn.create_group(active_txn_overrides={"sender": custom_sender}):
result = contract.check_txn_fields()
assert result == custom_sender
```
### algopy.op.AssetHoldingGet
[Section titled “algopy.op.AssetHoldingGet”](#algopyopassetholdingget)
```{testcode}
from algopy import op
class AssetContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def check_asset_holding(self, account: algopy.Account, asset: algopy.Asset) -> algopy.UInt64:
balance, _ = op.AssetHoldingGet.asset_balance(account, asset)
return balance
... # setup context (the snippet below assumes it is available as the 'context' variable)
asset = context.any.asset(total=algopy.UInt64(1000000))
account = context.any.account(opted_asset_balances={asset.id: algopy.UInt64(5000)})
contract = AssetContract()
result = contract.check_asset_holding(account, asset)
assert result == algopy.UInt64(5000)
```
### algopy.op.AppGlobal
[Section titled “algopy.op.AppGlobal”](#algopyopappglobal)
```{testcode}
from algopy import op
class StateContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def set_and_get_state(self, key: algopy.Bytes, value: algopy.UInt64) -> algopy.UInt64:
op.AppGlobal.put(key, value)
return op.AppGlobal.get_uint64(key)
... # setup context (the snippet below assumes it is available as the 'context' variable)
contract = StateContract()
key, value = algopy.Bytes(b"test_key"), algopy.UInt64(42)
result = contract.set_and_get_state(key, value)
assert result == value
stored_value = context.ledger.get_global_state(contract, key)
assert stored_value == 42
```
### algopy.op.Block
[Section titled “algopy.op.Block”](#algopyopblock)
```{testcode}
from algopy import op
class BlockInfoContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def get_block_seed(self) -> algopy.Bytes:
return op.Block.blk_seed(1000)
... # setup context (the snippet below assumes it is available as the 'context' variable)
context.ledger.set_block(1000, seed=123456, timestamp=1625097600)
contract = BlockInfoContract()
seed = contract.get_block_seed()
assert seed == algopy.op.itob(123456)
```
### algopy.op.AcctParamsGet
[Section titled “algopy.op.AcctParamsGet”](#algopyopacctparamsget)
```{testcode}
from algopy import op
class AccountParamsContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def get_account_balance(self, account: algopy.Account) -> algopy.UInt64:
balance, exists = op.AcctParamsGet.acct_balance(account)
assert exists
return balance
... # setup context (the snippet below assumes it is available as the 'context' variable)
account = context.any.account(balance=algopy.UInt64(1000000))
contract = AccountParamsContract()
balance = contract.get_account_balance(account)
assert balance == algopy.UInt64(1000000)
```
### algopy.op.AppParamsGet
[Section titled “algopy.op.AppParamsGet”](#algopyopappparamsget)
```{testcode}
class AppParamsContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def get_app_creator(self, app_id: algopy.Application) -> algopy.arc4.Address:
creator, exists = algopy.op.AppParamsGet.app_creator(app_id)
assert exists
return algopy.arc4.Address(creator)
... # setup context (the snippet below assumes it is available as the 'context' variable)
contract = AppParamsContract()
app = context.any.application()
creator = contract.get_app_creator(app)
assert creator == context.default_sender
```
### algopy.op.AssetParamsGet
[Section titled “algopy.op.AssetParamsGet”](#algopyopassetparamsget)
```{testcode}
from algopy import op
class AssetParamsContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def get_asset_total(self, asset_id: algopy.UInt64) -> algopy.UInt64:
total, exists = op.AssetParamsGet.asset_total(asset_id)
assert exists
return total
... # setup context (the snippet below assumes it is available as the 'context' variable)
asset = context.any.asset(total=algopy.UInt64(1000000), decimals=algopy.UInt64(6))
contract = AssetParamsContract()
total = contract.get_asset_total(asset.id)
assert total == algopy.UInt64(1000000)
```
### algopy.op.Box
[Section titled “algopy.op.Box”](#algopyopbox)
```{testcode}
from algopy import op
class BoxStorageContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def store_and_retrieve(self, key: algopy.Bytes, value: algopy.Bytes) -> algopy.Bytes:
op.Box.put(key, value)
retrieved_value, exists = op.Box.get(key)
assert exists
return retrieved_value
... # setup context (the snippet below assumes it is available as the 'context' variable)
contract = BoxStorageContract()
key, value = algopy.Bytes(b"test_key"), algopy.Bytes(b"test_value")
result = contract.store_and_retrieve(key, value)
assert result == value
stored_value = context.ledger.get_box(contract, key)
assert stored_value == value.value
```
## Mockable Opcodes
[Section titled “Mockable Opcodes”](#mockable-opcodes)
These opcodes are mockable in `algorand-python-testing`, allowing for controlled testing of complex operations:
### algopy.compile\_contract
[Section titled “algopy.compile\_contract”](#algopycompile_contract)
```{testcode}
from unittest.mock import patch, MagicMock
import algopy
mocked_response = MagicMock()
mocked_response.local_bytes = algopy.UInt64(4)
class MockContract(algopy.Contract):
...
class ContractFactory(algopy.ARC4Contract):
...
@algopy.arc4.abimethod
def compile_and_get_bytes(self) -> algopy.UInt64:
contract_response = algopy.compile_contract(MockContract)
return contract_response.local_bytes
... # setup context (the snippet below assumes it is available as the 'context' variable)
contract = ContractFactory()
with patch('algopy.compile_contract', return_value=mocked_response):
assert contract.compile_and_get_bytes() == 4
```
### algopy.arc4.abi\_call
[Section titled “algopy.arc4.abi\_call”](#algopyarc4abi_call)
```{testcode}
import unittest
from unittest.mock import patch, MagicMock
import algopy
import typing
class MockAbiCall:
def __call__(
self, *args: typing.Any, **_kwargs: typing.Any
) -> tuple[typing.Any, typing.Any]:
return (
algopy.arc4.UInt64(11),
MagicMock(),
)
def __getitem__(self, _item: object) -> typing.Self:
return self
class MyContract(algopy.ARC4Contract):
@algopy.arc4.abimethod
def my_method(self, arg1: algopy.UInt64, arg2: algopy.UInt64) -> algopy.UInt64:
return algopy.arc4.abi_call[algopy.arc4.UInt64]("my_other_method", arg1, arg2)[0].native
... # setup context (the snippet below assumes it is available as the 'context' variable)
contract = MyContract()
with patch('algopy.arc4.abi_call', MockAbiCall()):
result = contract.my_method(algopy.UInt64(10), algopy.UInt64(1))
assert result == 11
```
### algopy.op.vrf\_verify
[Section titled “algopy.op.vrf\_verify”](#algopyopvrf_verify)
```{testcode}
from unittest.mock import patch, MagicMock
import algopy
def test_mock_vrf_verify():
mock_result = (algopy.Bytes(b'mock_output'), True)
with patch('algopy.op.vrf_verify', return_value=mock_result) as mock_vrf_verify:
result = algopy.op.vrf_verify(
algopy.op.VrfVerify.VrfAlgorand,
algopy.Bytes(b'proof'),
algopy.Bytes(b'message'),
algopy.Bytes(b'public_key')
)
assert result == mock_result
mock_vrf_verify.assert_called_once_with(
algopy.op.VrfVerify.VrfAlgorand,
algopy.Bytes(b'proof'),
algopy.Bytes(b'message'),
algopy.Bytes(b'public_key')
)
test_mock_vrf_verify()
```
### algopy.op.EllipticCurve
[Section titled “algopy.op.EllipticCurve”](#algopyopellipticcurve)
```{testcode}
from unittest.mock import patch, MagicMock
import algopy


def test_mock_elliptic_curve_add():
    mock_result = algopy.Bytes(b'result')
    with patch('algopy.op.EllipticCurve.add', return_value=mock_result) as mock_add:
        result = algopy.op.EllipticCurve.add(
            algopy.op.EC.BN254g1,
            algopy.Bytes(b'a'),
            algopy.Bytes(b'b')
        )
        assert result == mock_result
        mock_add.assert_called_once_with(
            algopy.op.EC.BN254g1,
            algopy.Bytes(b'a'),
            algopy.Bytes(b'b'),
        )


test_mock_elliptic_curve_add()
```
These examples demonstrate how to mock key mockable opcodes in `algorand-python-testing`. Use similar techniques (in your preferred testing framework) for other mockable opcodes like `algopy.compile_logicsig`, `algopy.arc4.arc4_create`, and `algopy.arc4.arc4_update`.
Mocking these opcodes allows you to:
1. Control the behavior of complex operations not covered by *implemented* and *emulated* types.
2. Test edge cases and error conditions.
3. Isolate contract logic from external dependencies.
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Testing Guide
The Algorand Python Testing framework provides powerful tools for testing Algorand Python smart contracts within a Python interpreter. This guide covers the main features and concepts of the framework, helping you write effective tests for your Algorand applications.
```{note}
For all code examples in the _Testing Guide_ section, assume `context` is an instance of `AlgopyTestContext` obtained using the `algopy_testing_context()` context manager. All subsequent code is executed within this context.
```
```{mermaid}
graph TD
    subgraph GA["Your Development Environment"]
        A["algopy (type stubs)"]
        B["algopy_testing (testing framework) <br/> (You are here 📍)"]
        C["puya (compiler)"]
    end
    subgraph GB["Your Algorand Project"]
        D[Your Algorand Python contract]
    end
    D -->|type hints inferred from| A
    D -->|compiled using| C
    D -->|tested via| B
```
> *High-level overview of the relationship between your smart contracts project, Algorand Python Testing framework, Algorand Python type stubs, and the compiler*
The Algorand Python Testing framework streamlines unit testing of your Algorand Python smart contracts by offering functionality to:
1. Simulate the Algorand Virtual Machine (AVM) environment
2. Create and manipulate test accounts, assets, applications, transactions, and ARC4 types
3. Test smart contract classes, including their states, variables, and methods
4. Verify logic signatures and subroutines
5. Manage global state, local state, scratch slots, and boxes in test contexts
6. Simulate transactions and transaction groups, including inner transactions
7. Verify opcode behavior
By using this framework, you can ensure your Algorand Python smart contracts function correctly before deploying them to a live network.
Key features of the framework include:
* `AlgopyTestContext`: The main entry point for testing, providing access to various testing utilities and simulated blockchain state
* AVM Type Simulation: Accurate representations of AVM types like `UInt64` and `Bytes`
* ARC4 Support: Tools for testing ARC4 contracts and methods, including struct definitions and ABI encoding/decoding
* Transaction Simulation: Ability to create and execute various transaction types
* State Management: Tools for managing and verifying global and local state changes
* Opcode Simulation: Implementations of AVM opcodes for accurate smart contract behavior testing
The framework is designed to work seamlessly with Algorand Python smart contracts, allowing developers to write comprehensive unit tests that closely mimic the behavior of contracts on the Algorand blockchain.
The following video is a practical tutorial on unit testing Algorand smart contracts:
[Algorand Smart Contract Testing - Python](https://www.youtube.com/embed/B4mzNmQB5mU?rel=0)
## Table of Contents
[Section titled “Table of Contents”](#table-of-contents)
```{toctree}
---
maxdepth: 3
---
concepts
avm-types
arc4-types
transactions
contract-testing
signature-testing
state-management
subroutines
opcodes
```
# Smart Signature Testing
Test Algorand smart signatures (LogicSigs) with ease using the Algorand Python Testing framework.
```{testsetup}
import algopy
from algopy_testing import algopy_testing_context
# Create the context manager for snippets below
ctx_manager = algopy_testing_context()
# Enter the context
context = ctx_manager.__enter__()
```
## Define a LogicSig
[Section titled “Define a LogicSig”](#define-a-logicsig)
Use the `@logicsig` decorator to create a LogicSig:
```{testcode}
from algopy import logicsig, Account, Txn, Global, UInt64, Bytes


@logicsig
def hashed_time_locked_lsig() -> bool:
    # LogicSig code here
    return True  # Approve transaction
```
## Execute and Test
[Section titled “Execute and Test”](#execute-and-test)
Use `AlgopyTestContext.execute_logicsig()` to run and verify LogicSigs:
```{testcode}
with context.txn.create_group([
    context.any.txn.payment(),
]):
    result = context.execute_logicsig(hashed_time_locked_lsig, algopy.Bytes(b"secret"))

assert result is True
```
`execute_logicsig()` returns a boolean:
* `True`: Transaction approved
* `False`: Transaction rejected
## Pass Arguments
[Section titled “Pass Arguments”](#pass-arguments)
Provide arguments to LogicSigs using `execute_logicsig()`:
```{testcode}
result = context.execute_logicsig(hashed_time_locked_lsig, algopy.Bytes(b"secret"))
```
Access arguments in the LogicSig with `algopy.op.arg()` opcode:
```{testcode}
@logicsig
def hashed_time_locked_lsig() -> bool:
    secret = algopy.op.arg(0)
    expected_hash = algopy.op.sha256(algopy.Bytes(b"secret"))
    return algopy.op.sha256(secret) == expected_hash


# Example usage
secret = algopy.Bytes(b"secret")
assert context.execute_logicsig(hashed_time_locked_lsig, secret)
```
For more details on available operations, see the [coverage](../coverage).
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# State Management
`algorand-python-testing` provides tools to test state-related abstractions in Algorand smart contracts. This guide covers global state, local state, boxes, and scratch space management.
```{testsetup}
import algopy
from algopy_testing import algopy_testing_context
# Create the context manager for snippets below
ctx_manager = algopy_testing_context()
# Enter the context
context = ctx_manager.__enter__()
```
## Global State
[Section titled “Global State”](#global-state)
Global state is represented as instance attributes on `algopy.Contract` and `algopy.ARC4Contract` classes.
```{testcode}
class MyContract(algopy.ARC4Contract):
    def __init__(self):
        self.state_a = algopy.GlobalState(algopy.UInt64, key="global_uint64")
        self.state_b = algopy.UInt64(1)


# In your test
contract = MyContract()
contract.state_a.value = algopy.UInt64(10)
contract.state_b = algopy.UInt64(20)
```
## Local State
[Section titled “Local State”](#local-state)
Local state is defined similarly to global state, but accessed using account addresses as keys.
```{testcode}
class MyContract(algopy.ARC4Contract):
    def __init__(self):
        self.local_state_a = algopy.LocalState(algopy.UInt64, key="state_a")


# In your test
contract = MyContract()
account = context.any.account()
contract.local_state_a[account] = algopy.UInt64(10)
```
## Boxes
[Section titled “Boxes”](#boxes)
The framework supports various Box abstractions available in `algorand-python`.
```{testcode}
class MyContract(algopy.ARC4Contract):
    def __init__(self):
        self.box_map = algopy.BoxMap(algopy.Bytes, algopy.UInt64)

    @algopy.arc4.abimethod()
    def some_method(self, key_a: algopy.Bytes, key_b: algopy.Bytes, key_c: algopy.Bytes) -> None:
        self.box = algopy.Box(algopy.UInt64, key=key_a)
        self.box.value = algopy.UInt64(1)
        self.box_map[key_b] = algopy.UInt64(1)
        self.box_map[key_c] = algopy.UInt64(2)


# In your test
contract = MyContract()
key_a = b"key_a"
key_b = b"key_b"
key_c = b"key_c"
contract.some_method(algopy.Bytes(key_a), algopy.Bytes(key_b), algopy.Bytes(key_c))

# Access boxes
box_content = context.ledger.get_box(contract, key_a)
assert context.ledger.box_exists(contract, key_a)

# Set box content manually
with context.txn.create_group():
    context.ledger.set_box(contract, key_a, algopy.op.itob(algopy.UInt64(1)))
```
## Scratch Space
[Section titled “Scratch Space”](#scratch-space)
Scratch space is represented as a list of 256 slots for each transaction.
```{testcode}
class MyContract(algopy.Contract, scratch_slots=(1, 2, algopy.urange(3, 20))):
    def approval_program(self):
        algopy.op.Scratch.store(1, algopy.UInt64(5))
        assert algopy.op.Scratch.load_uint64(1) == algopy.UInt64(5)
        return True


# In your test
contract = MyContract()
result = contract.approval_program()
assert result
scratch_space = context.txn.last_group.get_scratch_space()
assert scratch_space[1] == algopy.UInt64(5)
```
For more detailed information, explore the example contracts in the `examples/` directory, the [coverage](../coverage) page, and the [API documentation](../api).
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Subroutines
Subroutines allow direct testing of internal contract logic without full application calls.
```{testsetup}
import algopy
import algopy_testing
from algopy_testing import algopy_testing_context
# Create the context manager for snippets below
ctx_manager = algopy_testing_context()
# Enter the context
context = ctx_manager.__enter__()
```
## Overview
[Section titled “Overview”](#overview)
The `@algopy.subroutine` decorator exposes contract methods for isolated testing within the Algorand Python Testing framework. This enables focused validation of core business logic without the overhead of full application deployment and execution.
## Usage
[Section titled “Usage”](#usage)
1. Decorate internal methods with `@algopy.subroutine`:
```{testcode}
from algopy import subroutine, UInt64


class MyContract:
    @subroutine
    def calculate_value(self, input: UInt64) -> UInt64:
        return input * UInt64(2)
```
2. Test the subroutine directly:
```{testcode}
def test_calculate_value(context: algopy_testing.AlgopyTestContext):
    contract = MyContract()
    result = contract.calculate_value(UInt64(5))
    assert result == UInt64(10)
```
## Benefits
[Section titled “Benefits”](#benefits)
* Faster test execution
* Simplified debugging
* Focused unit testing of core logic
## Best Practices
[Section titled “Best Practices”](#best-practices)
* Use subroutines for complex internal calculations
* Prefer writing `pure` subroutines in ARC4Contract classes
* Combine with full application tests for comprehensive coverage
* Maintain realistic input and output types (e.g., `UInt64`, `Bytes`)
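The "pure subroutine" guideline above can be illustrated with a plain-Python sketch: a calculation that reads no state and touches no transaction context can be asserted on directly. The basis-points fee calculation here is a hypothetical example, not part of the framework:

```python
def calculate_fee(amount: int, rate_bps: int) -> int:
    # Pure function: the output depends only on its inputs, so no test
    # context or application deployment is needed to exercise it
    return amount * rate_bps // 10_000


# Edge cases become one-line asserts
assert calculate_fee(1_000_000, 30) == 3_000  # 0.3% of 1,000,000 base units
assert calculate_fee(0, 30) == 0              # zero amount
assert calculate_fee(1, 1) == 0               # integer division truncates
```

Once the pure logic is verified in isolation, a smaller number of full application-call tests can cover the state and transaction wiring around it.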
## Example
[Section titled “Example”](#example)
For a complete example, see the `simple_voting` contract in the [examples](../examples) section.
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Transactions
The testing framework follows the transaction definitions described in the [`algorand-python` docs](https://algorand-python.readthedocs.io/en/latest/algorand_sdk/transactions.html). This section focuses on *value generators* and interactions with inner transactions. It also explains how the framework identifies the *active* transaction group during contract method, subroutine, or logic signature invocation.
```{testsetup}
import algopy
import algopy_testing
from algopy_testing import algopy_testing_context
# Create the context manager for snippets below
ctx_manager = algopy_testing_context()
# Enter the context
context = ctx_manager.__enter__()
```
## Group Transactions
[Section titled “Group Transactions”](#group-transactions)
These are the test implementations of the transaction stubs available under the `algopy.gtxn.*` namespace. They are exposed by the [`algopy.TxnValueGenerator`](../api) instance, accessible via the `context.any.txn` property:
```{mermaid}
graph TD
    A[TxnValueGenerator] --> B[payment]
    A --> C[asset_transfer]
    A --> D[application_call]
    A --> E[asset_config]
    A --> F[key_registration]
    A --> G[asset_freeze]
    A --> H[transaction]
```
```{testcode}
... # instantiate test context
# Generate a random payment transaction
pay_txn = context.any.txn.payment(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    receiver=context.any.account(),  # Required
    amount=algopy.UInt64(1000000)  # Required
)

# Generate a random asset transfer transaction
asset_transfer_txn = context.any.txn.asset_transfer(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    receiver=context.any.account(),  # Required
    asset_id=algopy.UInt64(1),  # Required
    amount=algopy.UInt64(1000)  # Required
)

# Generate a random application call transaction
app_call_txn = context.any.txn.application_call(
    app_id=context.any.application(),  # Required
    app_args=[algopy.Bytes(b"arg1"), algopy.Bytes(b"arg2")],  # Optional: Defaults to empty list if not provided
    accounts=[context.any.account()],  # Optional: Defaults to empty list if not provided
    assets=[context.any.asset()],  # Optional: Defaults to empty list if not provided
    apps=[context.any.application()],  # Optional: Defaults to empty list if not provided
    approval_program_pages=[algopy.Bytes(b"approval_code")],  # Optional: Defaults to empty list if not provided
    clear_state_program_pages=[algopy.Bytes(b"clear_code")],  # Optional: Defaults to empty list if not provided
    scratch_space={0: algopy.Bytes(b"scratch")}  # Optional: Defaults to empty dict if not provided
)

# Generate a random asset config transaction
asset_config_txn = context.any.txn.asset_config(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    asset_id=algopy.UInt64(1),  # Optional: If not provided, creates a new asset
    total=1000000,  # Required for new assets
    decimals=0,  # Required for new assets
    default_frozen=False,  # Optional: Defaults to False if not provided
    unit_name="UNIT",  # Optional: Defaults to empty string if not provided
    asset_name="Asset",  # Optional: Defaults to empty string if not provided
    url="http://asset-url",  # Optional: Defaults to empty string if not provided
    metadata_hash=b"metadata_hash",  # Optional: Defaults to empty bytes if not provided
    manager=context.any.account(),  # Optional: Defaults to sender if not provided
    reserve=context.any.account(),  # Optional: Defaults to zero address if not provided
    freeze=context.any.account(),  # Optional: Defaults to zero address if not provided
    clawback=context.any.account()  # Optional: Defaults to zero address if not provided
)

# Generate a random key registration transaction
key_reg_txn = context.any.txn.key_registration(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    vote_pk=algopy.Bytes(b"vote_pk"),  # Optional: Defaults to empty bytes if not provided
    selection_pk=algopy.Bytes(b"selection_pk"),  # Optional: Defaults to empty bytes if not provided
    vote_first=algopy.UInt64(1),  # Optional: Defaults to 0 if not provided
    vote_last=algopy.UInt64(1000),  # Optional: Defaults to 0 if not provided
    vote_key_dilution=algopy.UInt64(10000)  # Optional: Defaults to 0 if not provided
)

# Generate a random asset freeze transaction
asset_freeze_txn = context.any.txn.asset_freeze(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    asset_id=algopy.UInt64(1),  # Required
    freeze_target=context.any.account(),  # Required
    freeze_state=True  # Required
)

# Generate a random transaction of a specified type
generic_txn = context.any.txn.transaction(
    type=algopy.TransactionType.Payment,  # Required
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    receiver=context.any.account(),  # Required for Payment
    amount=algopy.UInt64(1000000)  # Required for Payment
)
```
## Preparing for execution
[Section titled “Preparing for execution”](#preparing-for-execution)
On the Algorand network, every interaction with a smart contract instance (application) happens in the context of a specific transaction or transaction group, in which one or more transactions are application calls targeting smart contract instances.
To emulate this behaviour, the `create_group` context manager, available on the [`algopy.TransactionContext`](../api) instance, allows setting temporary transaction fields within a specific scope, passing in emulated transaction objects, and identifying the active transaction index within the transaction group.
```{testcode}
import algopy
from algopy_testing import AlgopyTestContext, algopy_testing_context


class SimpleContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def check_sender(self) -> algopy.arc4.Address:
        return algopy.arc4.Address(algopy.Txn.sender)


...
# Create a contract instance
contract = SimpleContract()

# Use active_txn_overrides to change the sender
test_sender = context.any.account()
with context.txn.create_group(active_txn_overrides={"sender": test_sender}):
    # Call the contract method
    result = contract.check_sender()
    assert result == test_sender

# Assert that the sender is the test_sender after exiting the
# transaction group context
assert context.txn.last_active.sender == test_sender

# Assert the size of last transaction group
assert len(context.txn.last_group.txns) == 1
```
## Inner Transaction
[Section titled “Inner Transaction”](#inner-transaction)
Inner transactions are AVM transactions that are signed and executed by AVM applications (instances of deployed smart contracts or signatures).
When testing smart contracts, to stay consistent with the AVM, the framework *does not* allow you to submit inner transactions outside of contract/subroutine invocation, but you can interact with and manage inner transactions using the test context manager as follows:
```{testcode}
class MyContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def pay_via_itxn(self, asset: algopy.Asset) -> None:
        algopy.itxn.Payment(
            receiver=algopy.Txn.sender,
            amount=algopy.UInt64(1)
        ).submit()


... # setup context (below assumes available under 'context' variable)
# Create a contract instance
contract = MyContract()

# Generate a random asset
asset = context.any.asset()

# Execute the contract method
contract.pay_via_itxn(asset=asset)

# Access the last submitted inner transaction
payment_txn = context.txn.last_group.last_itxn.payment

# Assert properties of the inner transaction
assert payment_txn.receiver == context.txn.last_active.sender
assert payment_txn.amount == algopy.UInt64(1)

# Access all inner transactions in the last group
for itxn in context.txn.last_group.itxn_groups[-1]:
    # Perform assertions on each inner transaction
    ...

# Access a specific inner transaction group
first_itxn_group = context.txn.last_group.get_itxn_group(0)
first_payment_txn = first_itxn_group.payment(0)
```
In this example, we define a contract method `pay_via_itxn` that creates and submits an inner payment transaction. The test context automatically captures and stores the inner transactions submitted by the contract method.
Note that we don’t need to wrap the execution in a `create_group` context manager because the method is decorated with `@algopy.arc4.abimethod`, which automatically creates a transaction group for the method. The `create_group` context manager is only needed when you want to create more complex transaction groups or patch transaction fields for various transaction-related opcodes in AVM.
To access the submitted inner transactions:
1. Use `context.txn.last_group.last_itxn` to access the last submitted inner transaction of a specific type.
2. Iterate over all inner transactions in the last group using `context.txn.last_group.itxn_groups[-1]`.
3. Access a specific inner transaction group using `context.txn.last_group.get_itxn_group(index)`.
These methods provide type validation and will raise an error if the requested transaction type doesn’t match the actual type of the inner transaction.
## References
[Section titled “References”](#references)
* [API](../api) for more details on the test context manager and inner transactions related methods that perform implicit inner transaction type validation.
* [Examples](../examples) for more examples of smart contracts and associated tests that interact with inner transactions.
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Testing Guide
The Algorand TypeScript Testing framework provides powerful tools for testing Algorand TypeScript smart contracts within a Node.js environment. This guide covers the main features and concepts of the framework, helping you write effective tests for your Algorand applications.
```{note}
For all code examples in the _Testing Guide_ section, assume `context` is an instance of `TestExecutionContext`, obtained by initialising the `TestExecutionContext` class. All subsequent code is executed within this context.
```
The Algorand TypeScript Testing framework streamlines unit testing of your Algorand TypeScript smart contracts by offering functionality to:
1. Simulate the Algorand Virtual Machine (AVM) environment
2. Create and manipulate test accounts, assets, applications, transactions, and ARC4 types
3. Test smart contract classes, including their states, variables, and methods
4. Verify logic signatures and subroutines
5. Manage global state, local state, scratch slots, and boxes in test contexts
6. Simulate transactions and transaction groups, including inner transactions
7. Verify opcode behavior
By using this framework, you can ensure your Algorand TypeScript smart contracts function correctly before deploying them to a live network.
Key features of the framework include:
* `TestExecutionContext`: The main entry point for testing, providing access to various testing utilities and simulated blockchain state
* AVM Type Simulation: Accurate representations of AVM types like `uint64` and `bytes`
* ARC4 Support: Tools for testing ARC4 contracts and methods, including struct definitions and ABI encoding/decoding
* Transaction Simulation: Ability to create and execute various transaction types
* State Management: Tools for managing and verifying global and local state changes
* Opcode Simulation: Implementations of AVM opcodes for accurate smart contract behavior testing
The framework is designed to work seamlessly with Algorand TypeScript smart contracts, allowing developers to write comprehensive unit tests that closely mimic the behavior of contracts on the Algorand blockchain.
## Table of Contents
[Section titled “Table of Contents”](#table-of-contents)
* [Concepts](./concepts)
* [AVM Types](./avm-types)
* [ARC4 Types](./arc4-types)
* [Transactions](./transactions)
* [Smart Contract Testing](./contract-testing)
* [Smart Signature Testing](./signature-testing)
* [State Management](./state-management)
* [AVM Opcodes](./opcodes)
# ARC4 Types
These types are available under the `arc4` namespace. Refer to the [ARC4 specification](https://arc.algorand.foundation/ARCs/arc-0004) for more details.
```{hint}
Test execution context provides _value generators_ for ARC4 types. To access their _value generators_, use `{context_instance}.any.arc4` property. See more examples below.
```
```{note}
For all `arc4` types with and without respective _value generator_, instantiation can be performed directly. If you have a suggestion for a new _value generator_ implementation, please open an issue in the [`algorand-typescript-testing`](https://github.com/algorandfoundation/algorand-typescript-testing) repository or contribute by following the [contribution guide](https://github.com/algorandfoundation/algorand-typescript-testing/blob/main/CONTRIBUTING).
```
```ts
import { arc4 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
```
## Unsigned Integers
[Section titled “Unsigned Integers”](#unsigned-integers)
```ts
// Integer types
const uint8Value = new arc4.UintN8(255);
const uint16Value = new arc4.UintN16(65535);
const uint32Value = new arc4.UintN32(4294967295);
const uint64Value = new arc4.UintN64(18446744073709551615n);
// Generate a random unsigned arc4 integer with default range
const uint8 = ctx.any.arc4.uintN8();
const uint16 = ctx.any.arc4.uintN16();
const uint32 = ctx.any.arc4.uintN32();
const uint64 = ctx.any.arc4.uintN64();
const biguint128 = ctx.any.arc4.uintN128();
const biguint256 = ctx.any.arc4.uintN256();
const biguint512 = ctx.any.arc4.uintN512();
// Generate a random unsigned arc4 integer with specified range
const uint8Custom = ctx.any.arc4.uintN8(10, 100);
const uint16Custom = ctx.any.arc4.uintN16(1000, 5000);
const uint32Custom = ctx.any.arc4.uintN32(100000, 1000000);
const uint64Custom = ctx.any.arc4.uintN64(1000000000, 10000000000);
const biguint128Custom = ctx.any.arc4.uintN128(1000000000000000, 10000000000000000n);
const biguint256Custom = ctx.any.arc4.uintN256(
  1000000000000000000000000n,
  10000000000000000000000000n,
);
const biguint512Custom = ctx.any.arc4.uintN512(
  10000000000000000000000000000000000n,
  10000000000000000000000000000000000n,
);
```
## Address
[Section titled “Address”](#address)
```ts
// Address type
const addressValue = new arc4.Address('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ');
// Generate a random address
const randomAddress = ctx.any.arc4.address();
// Access the native underlying type
const native = randomAddress.native;
```
## Dynamic Bytes
[Section titled “Dynamic Bytes”](#dynamic-bytes)
```ts
// Dynamic byte string
const bytesValue = new arc4.DynamicBytes('Hello, Algorand!');
// Generate random dynamic bytes
const randomDynamicBytes = ctx.any.arc4.dynamicBytes(123); // n is the number of bits in the arc4 dynamic bytes
```
## String
[Section titled “String”](#string)
```ts
// UTF-8 encoded string
const stringValue = new arc4.Str('Hello, Algorand!');
// Generate random string
const randomString = ctx.any.arc4.str(12); // n is the number of bits in the arc4 string
```
```ts
// test cleanup
ctx.reset();
```
# AVM Types
These types are available directly under the `algorand-typescript` namespace. They represent the basic AVM primitive types and can be instantiated directly or via *value generators*:
```{note}
For primitive `algorand-typescript` types such as `Account`, `Application`, `Asset`, `uint64`, `biguint`, `bytes`, `string` with and without respective _value generator_, instantiation can be performed directly. If you have a suggestion for a new _value generator_ implementation, please open an issue in the [`algorand-typescript-testing`](https://github.com/algorandfoundation/algorand-typescript-testing) repository or contribute by following the [contribution guide](https://github.com/algorandfoundation/algorand-typescript-testing/blob/main/CONTRIBUTING).
```
```ts
import * as algots from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
```
## uint64
[Section titled “uint64”](#uint64)
```ts
// Direct instantiation
const uint64Value = algots.Uint64(100);
// Generate a random UInt64 value
const randomUint64 = ctx.any.uint64();
// Specify a range
const randomUint64InRange = ctx.any.uint64(1000, 9999);
```
## bytes
[Section titled “bytes”](#bytes)
```ts
// Direct instantiation
const bytesValue = algots.Bytes('Hello, Algorand!');
// Generate random byte sequences
const randomBytes = ctx.any.bytes();
// Specify the length
const randomBytesOfLength = ctx.any.bytes(32);
```
## string
[Section titled “string”](#string)
```ts
// Direct instantiation
const stringValue = 'Hello, Algorand!';
// Generate random strings
const randomString = ctx.any.string();
// Specify the length
const randomStringOfLength = ctx.any.string(16);
```
## biguint
[Section titled “biguint”](#biguint)
```ts
// Direct instantiation
const biguintValue = algots.BigUint(100);
// Generate a random BigUInt value
const randomBiguint = ctx.any.biguint();
// Specify the min value
const randomBiguintOver = ctx.any.biguint(100n);
```
## Asset
[Section titled “Asset”](#asset)
```ts
// Direct instantiation
const asset = algots.Asset(1001);

// Generate a random asset
const randomAsset = ctx.any.asset({
  clawback: ctx.any.account(), // Optional: Clawback address
  creator: ctx.any.account(), // Optional: Creator account
  decimals: 6, // Optional: Number of decimals
  defaultFrozen: false, // Optional: Default frozen state
  freeze: ctx.any.account(), // Optional: Freeze address
  manager: ctx.any.account(), // Optional: Manager address
  metadataHash: ctx.any.bytes(32), // Optional: Metadata hash
  name: algots.Bytes(ctx.any.string()), // Optional: Asset name
  reserve: ctx.any.account(), // Optional: Reserve address
  total: 1000000, // Optional: Total supply
  unitName: algots.Bytes(ctx.any.string()), // Optional: Unit name
  url: algots.Bytes(ctx.any.string()), // Optional: Asset URL
});

// Get an asset by ID
const fetchedAsset = ctx.ledger.getAsset(randomAsset.id);

// Update an asset
ctx.ledger.patchAssetData(randomAsset, {
  clawback: ctx.any.account(), // Optional: New clawback address
  creator: ctx.any.account(), // Optional: Creator account
  decimals: 6, // Optional: New number of decimals
  defaultFrozen: false, // Optional: Default frozen state
  freeze: ctx.any.account(), // Optional: New freeze address
  manager: ctx.any.account(), // Optional: New manager address
  metadataHash: ctx.any.bytes(32), // Optional: New metadata hash
  name: algots.Bytes(ctx.any.string()), // Optional: New asset name
  reserve: ctx.any.account(), // Optional: New reserve address
  total: 1000000, // Optional: New total supply
  unitName: algots.Bytes(ctx.any.string()), // Optional: Unit name
  url: algots.Bytes(ctx.any.string()), // Optional: New asset URL
});
```
## Account
[Section titled “Account”](#account)
```ts
// Direct instantiation
const rawAddress = algots.Bytes.fromBase32(
  'PUYAGEGVCOEBP57LUKPNOCSMRWHZJSU4S62RGC2AONDUEIHC6P7FOPJQ4I',
);
const account = algots.Account(rawAddress); // called with no argument, defaults to the zero address

// Generate a random account
const randomAccount = ctx.any.account({
  address: rawAddress, // Optional: Specify a custom address, defaults to a random address
  optedAssetBalances: new Map([]), // Optional: Specify opted asset balances as a map of asset IDs to balances
  optedApplications: [], // Optional: Specify opted apps as a sequence of Application objects
  totalAppsCreated: 0, // Optional: Specify the total number of created applications
  totalAppsOptedIn: 0, // Optional: Specify the total number of applications opted into
  totalAssets: 0, // Optional: Specify the total number of assets
  totalAssetsCreated: 0, // Optional: Specify the total number of created assets
  totalBoxBytes: 0, // Optional: Specify the total number of box bytes
  totalBoxes: 0, // Optional: Specify the total number of boxes
  totalExtraAppPages: 0, // Optional: Specify the total number of extra app pages
  totalNumByteSlice: 0, // Optional: Specify the total number of byte slices
  totalNumUint: 0, // Optional: Specify the total number of uints
  minBalance: 0, // Optional: Specify a minimum balance
  balance: 0, // Optional: Specify an initial balance
  authAddress: algots.Account(), // Optional: Specify an auth address
});

// Generate a random account that is opted into a specific asset
const mockAsset = ctx.any.asset();
const mockAccount = ctx.any.account({
  optedAssetBalances: new Map([[mockAsset.id, 123]]),
});

// Get an account by address
const fetchedAccount = ctx.ledger.getAccount(mockAccount);

// Update an account
ctx.ledger.patchAccountData(mockAccount, {
  account: {
    balance: 0, // Optional: New balance
    minBalance: 0, // Optional: New minimum balance
    authAddress: ctx.any.account(), // Optional: New auth address
    totalAssets: 0, // Optional: New total number of assets
    totalAssetsCreated: 0, // Optional: New total number of created assets
    totalAppsCreated: 0, // Optional: New total number of created applications
    totalAppsOptedIn: 0, // Optional: New total number of applications opted into
    totalExtraAppPages: 0, // Optional: New total number of extra application pages
  },
});

// Check if an account is opted into a specific asset
const optedIn = fetchedAccount.isOptedIn(mockAsset);
```
## Application
[Section titled “Application”](#application)
```ts
// Direct instantiation
const application = algots.Application();
// Generate a random application
const randomApp = ctx.any.application({
approvalProgram: algots.Bytes(''), // Optional: Specify a custom approval program
clearStateProgram: algots.Bytes(''), // Optional: Specify a custom clear state program
globalNumUint: 1, // Optional: Number of global uint values
globalNumBytes: 1, // Optional: Number of global byte values
localNumUint: 1, // Optional: Number of local uint values
localNumBytes: 1, // Optional: Number of local byte values
extraProgramPages: 1, // Optional: Number of extra program pages
creator: ctx.defaultSender, // Optional: Specify the creator account
});
// Get an application by ID
const app = ctx.ledger.getApplication(randomApp.id);
// Update an application
ctx.ledger.patchApplicationData(randomApp, {
application: {
approvalProgram: algots.Bytes(''), // Optional: New approval program
clearStateProgram: algots.Bytes(''), // Optional: New clear state program
globalNumUint: 1, // Optional: New number of global uint values
globalNumBytes: 1, // Optional: New number of global byte values
localNumUint: 1, // Optional: New number of local uint values
localNumBytes: 1, // Optional: New number of local byte values
extraProgramPages: 1, // Optional: New number of extra program pages
creator: ctx.defaultSender, // Optional: New creator account
},
});
// Patch logs for an application. When accessed via transaction or inner-transaction opcodes, the patched logs are returned unless new logs were added to the transaction during execution.
const testApp = ctx.any.application({
appLogs: [algots.Bytes('log entry 1'), algots.Bytes('log entry 2')],
});
// Get app associated with the active contract
class MyContract extends algots.arc4.Contract {}
const contract = ctx.contract.create(MyContract);
const activeApp = ctx.ledger.getApplicationForContract(contract);
```
```ts
// test context clean up
ctx.reset();
```
# Concepts
The following sections provide an overview of key concepts and features in the Algorand TypeScript Testing framework.
## Test Context
[Section titled “Test Context”](#test-context)
The main abstraction for interacting with the testing framework is the [`TestExecutionContext`](../api#contexts). It creates an emulated Algorand environment that closely mimics AVM behavior relevant to unit testing the contracts and provides a TypeScript interface for interacting with the emulated environment.
```typescript
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
import { afterEach, describe, it } from 'vitest';
describe('MyContract', () => {
// Recommended way to instantiate the test context
const ctx = new TestExecutionContext();
afterEach(() => {
// ctx should be reset after each test is executed
ctx.reset();
});
it('test my contract', () => {
// Your test code here
});
});
```
The context manager interface exposes four main properties:
1. `contract`: An instance of `ContractContext` for creating instances of the contract under test and registering them with the test execution context.
2. `ledger`: An instance of `LedgerContext` for interacting with and querying the emulated Algorand ledger state.
3. `txn`: An instance of `TransactionContext` for creating and managing transaction groups, submitting transactions, and accessing transaction results.
4. `any`: An instance of `AvmValueGenerator` for generating randomized test data.
The `any` property provides access to different value generators:
* `AvmValueGenerator`: Base abstractions for AVM types. All methods are available directly on the instance returned from `any`.
* `TxnValueGenerator`: Accessible via `any.txn`, for transaction-related data.
* `Arc4ValueGenerator`: Accessible via `any.arc4`, for ARC4 type data.
These generators allow creation of constrained random values for various AVM entities (accounts, assets, applications, etc.) when specific values are not required.
```{hint}
Value generators are powerful tools for generating test data for specified AVM types. They allow further constraints on random value generation via arguments, making it easier to generate test data when exact values are not necessary.
When used with the 'Arrange, Act, Assert' pattern, value generators can be especially useful in setting up clear and concise test data in arrange steps.
```
## Types of `algorand-typescript` stub implementations
[Section titled “Types of algorand-typescript stub implementations”](#types-of-algorand-typescript-stub-implementations)
As explained in the [introduction](index), `algorand-typescript-testing` *injects* test implementations for stubs available in the `algorand-typescript` package. However, not all of the stubs are implemented in the same manner:
1. **Native**: Fully matches AVM computation in TypeScript. For example, `op.sha256` and other cryptographic operations behave identically in AVM and unit tests. This implies that the majority of opcodes that are ‘pure’ functions in AVM also have a native TypeScript implementation provided by this package. These abstractions and opcodes can be used within and outside of the testing context.
2. **Emulated**: Uses `TestExecutionContext` to mimic AVM behavior. For example, `Box.put` on a `Box` within a test context stores data in the test context’s emulated ledger, not the real Algorand network, but provides the same interface.
3. **Mockable**: Not implemented, but can be mocked or patched. For example, `op.onlineStake` can be mocked to return specific values or behaviors; otherwise, it throws a ‘not implemented’ error. This category covers cases where a native or emulated implementation in a unit test context is impractical or overly complex.
# Smart Contract Testing
This guide provides an overview of how to test smart contracts using the [Algorand TypeScript Testing package](https://www.npmjs.com/package/@algorandfoundation/algorand-typescript-testing). We will cover the basics of testing `arc4.Contract` and `BaseContract` classes, focusing on the `abimethod` and `baremethod` decorators.
```{note}
The code snippets showcasing the contract testing capabilities are using [vitest](https://vitest.dev/) as the test framework. However, note that the `algorand-typescript-testing` package can be used with any other test framework that supports TypeScript. `vitest` is used for demonstration purposes in this documentation.
```
```ts
import { arc4 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
```
The following video includes a practical tutorial of how to do tests in Algorand:
[Algorand Smart Contract Testing - TypeScript](https://www.youtube.com/embed/6SSga2FCg-c?rel=0)
## `arc4.Contract`
[Section titled “arc4.Contract”](#arc4contract)
Subclasses of `arc4.Contract` are **required** to be instantiated with an active test context. As part of instantiation, the test context will automatically create a matching `Application` object instance.
Within the class implementation, methods decorated with `arc4.abimethod` and `arc4.baremethod` will automatically assemble a `gtxn.ApplicationTxn` transaction to emulate the AVM application call. This behavior can be overridden by setting the transaction group manually as part of test setup, via implicit invocation of the `ctx.any.txn.applicationCall` *value generator* (refer to [APIs](../apis) for more details).
```ts
import type { bytes, uint64 } from '@algorandfoundation/algorand-typescript';
import { arc4, assert, Bytes, GlobalState, LocalState, Txn, Uint64 } from '@algorandfoundation/algorand-typescript';
import { expect } from 'vitest';
class SimpleVotingContract extends arc4.Contract {
topic = GlobalState({ initialValue: Bytes('default_topic'), key: 'topic' });
votes = GlobalState({
initialValue: Uint64(0),
key: 'votes',
});
voted = LocalState({ key: 'voted' });
@arc4.abimethod({ onCreate: 'require' })
create(initialTopic: bytes) {
this.topic.value = initialTopic;
this.votes.value = Uint64(0);
}
@arc4.abimethod()
vote(): uint64 {
assert(this.voted(Txn.sender).value === 0, 'Account has already voted');
this.votes.value = this.votes.value + 1;
this.voted(Txn.sender).value = Uint64(1);
return this.votes.value;
}
@arc4.abimethod({ readonly: true })
getVotes(): uint64 {
return this.votes.value;
}
@arc4.abimethod()
changeTopic(newTopic: bytes) {
assert(Txn.sender === Txn.applicationId.creator, 'Only creator can change topic');
this.topic.value = newTopic;
this.votes.value = Uint64(0);
// Reset user's vote (this is simplified per single user for the sake of example)
this.voted(Txn.sender).value = Uint64(0);
}
}
// Arrange
const initialTopic = Bytes('initial_topic');
const contract = ctx.contract.create(SimpleVotingContract);
contract.voted(ctx.defaultSender).value = Uint64(0);
// Act - Create the topic
contract.create(initialTopic);
// Assert - Check initial state
expect(contract.topic.value).toEqual(initialTopic);
expect(contract.votes.value).toEqual(Uint64(0));
// Act - Vote
// The method `.vote()` is decorated with `arc4.abimethod`, which means it will assemble a transaction to emulate the AVM application call
const result = contract.vote();
// Assert - you can access the corresponding auto generated application call transaction via test context
expect(ctx.txn.lastGroup.transactions.length).toEqual(1);
// Assert - Note how local and global state are accessed via regular TypeScript instance properties
expect(result).toEqual(1);
expect(contract.votes.value).toEqual(1);
expect(contract.voted(ctx.defaultSender).value).toEqual(1);
// Act - Change topic
const newTopic = Bytes('new_topic');
contract.changeTopic(newTopic);
// Assert - Check topic changed and votes reset
expect(contract.topic.value).toEqual(newTopic);
expect(contract.votes.value).toEqual(0);
expect(contract.voted(ctx.defaultSender).value).toEqual(0);
// Act - Get votes (should be 0 after reset)
const votes = contract.getVotes();
// Assert - Check votes
expect(votes).toEqual(0);
```
For more examples of tests using `arc4.Contract`, see the [examples](../examples) section.
## `BaseContract`
[Section titled “`BaseContract`”](#basecontract)
Subclasses of `BaseContract` are **required** to be instantiated with an active test context. As part of instantiation, the test context will automatically create a matching `Application` object instance. This behavior is identical to `arc4.Contract` class instances.
Unlike `arc4.Contract`, `BaseContract` requires manual setup of the transaction context and explicit method calls.
Here’s an updated example demonstrating how to test a `BaseContract` class:
```ts
import { BaseContract, Bytes, GlobalState, Uint64 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
import { afterEach, expect, test } from 'vitest';
class CounterContract extends BaseContract {
counter = GlobalState({ initialValue: Uint64(0) });
increment() {
this.counter.value = this.counter.value + 1;
return Uint64(1);
}
approvalProgram() {
return this.increment();
}
clearStateProgram() {
return Uint64(1);
}
}
const ctx = new TestExecutionContext();
afterEach(() => {
ctx.reset();
});
test('increment', () => {
// Instantiate contract
const contract = ctx.contract.create(CounterContract);
// Set up the transaction context with an application call transaction
ctx.txn
.createScope([
ctx.any.txn.applicationCall({
appId: contract,
sender: ctx.defaultSender,
appArgs: [Bytes('increment')],
}),
])
.execute(() => {
// Invoke approval program
const result = contract.approvalProgram();
// Assert approval program result
expect(result).toEqual(1);
// Assert counter value
expect(contract.counter.value).toEqual(1);
});
// Test clear state program
expect(contract.clearStateProgram()).toEqual(1);
});
test('increment with multiple txns', () => {
const contract = ctx.contract.create(CounterContract);
// For scenarios with multiple transactions, you can still use gtxns
const extraPayment = ctx.any.txn.payment();
ctx.txn
.createScope(
[
extraPayment,
ctx.any.txn.applicationCall({
sender: ctx.defaultSender,
appId: contract,
appArgs: [Bytes('increment')],
}),
],
1, // Set the application call as the active transaction
)
.execute(() => {
const result = contract.approvalProgram();
expect(result).toEqual(1);
expect(contract.counter.value).toEqual(1);
});
expect(ctx.txn.lastGroup.transactions.length).toEqual(2);
});
```
In this updated example:
1. We use `ctx.txn.createScope()` with `ctx.any.txn.applicationCall` to set up the transaction context for a single application call.
2. For scenarios involving multiple transactions, you can still use the `group` parameter to create a transaction group, as shown in the `increment with multiple txns` test.
This approach provides more flexibility in setting up the transaction context for testing `Contract` classes, allowing for both simple single-transaction scenarios and more complex multi-transaction tests.
## Defer contract method invocation
[Section titled “Defer contract method invocation”](#defer-contract-method-invocation)
You can create deferred application calls for more complex testing scenarios where order of transactions needs to be controlled:
```ts
import { arc4, gtxn, Uint64 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
import { test } from 'vitest';
class MyARC4Contract extends arc4.Contract {
someMethod(payment: gtxn.PaymentTxn) {
return Uint64(1);
}
}
const ctx = new TestExecutionContext();
test('deferred call', () => {
const contract = ctx.contract.create(MyARC4Contract);
const extraPayment = ctx.any.txn.payment();
const extraAssetTransfer = ctx.any.txn.assetTransfer();
const implicitPayment = ctx.any.txn.payment();
const deferredCall = ctx.txn.deferAppCall(
contract,
contract.someMethod,
'someMethod',
implicitPayment,
);
ctx.txn.createScope([extraPayment, deferredCall, extraAssetTransfer]).execute(() => {
const result = deferredCall.submit();
});
console.log(ctx.txn.lastGroup); // [extraPayment, implicitPayment, app call, extraAssetTransfer]
});
```
A deferred application call prepares the application call transaction without immediately executing it. The call can be executed later by invoking the `.submit()` method on the deferred application call instance. As demonstrated in the example, you can also include the deferred call in a transaction group scope to execute it as part of a larger transaction group. When `.submit()` is called, only the specific method passed to `deferAppCall()` will be executed.
```ts
// test cleanup
ctx.reset();
```
# AVM Opcodes
The [coverage](coverage) file provides a comprehensive list of all opcodes and their respective types, categorized as *Mockable*, *Emulated*, or *Native* within the `algorand-typescript-testing` package. This section highlights a **subset** of opcodes and types that typically require interaction with the test execution context.
`Native` opcodes are assumed to function as they do in the Algorand Virtual Machine, given their stateless nature. If you encounter issues with any `Native` opcodes, please raise an issue in the [`algorand-typescript-testing` repo](https://github.com/algorandfoundation/algorand-typescript-testing/issues/new/choose) or contribute a PR following the [Contributing](https://github.com/algorandfoundation/algorand-typescript-testing/blob/main/CONTRIBUTING) guide.
```ts
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
```
## Implemented Types
[Section titled “Implemented Types”](#implemented-types)
These types are fully implemented in TypeScript and behave identically to their AVM counterparts:
### 1. Cryptographic Operations
[Section titled “1. Cryptographic Operations”](#1-cryptographic-operations)
The following opcodes are demonstrated:
* `op.sha256`
* `op.keccak256`
* `op.ecdsaVerify`
```ts
import { Bytes, op } from '@algorandfoundation/algorand-typescript';
import { expect } from 'vitest';
// SHA256 hash
const data = Bytes('Hello, World!');
const hashed = op.sha256(data);
// Keccak256 hash
const keccakHashed = op.keccak256(data);
// ECDSA verification
const messageHash = Bytes.fromHex(
'f809fd0aa0bb0f20b354c6b2f86ea751957a4e262a546bd716f34f69b9516ae1',
);
const sigR = Bytes.fromHex('18d96c7cda4bc14d06277534681ded8a94828eb731d8b842e0da8105408c83cf');
const sigS = Bytes.fromHex('7d33c61acf39cbb7a1d51c7126f1718116179adebd31618c4604a1f03b5c274a');
const pubkeyX = Bytes.fromHex('f8140e3b2b92f7cbdc8196bc6baa9ce86cf15c18e8ad0145d50824e6fa890264');
const pubkeyY = Bytes.fromHex('bd437b75d6f1db67155a95a0da4b41f2b6b3dc5d42f7db56238449e404a6c0a3');
const result = op.ecdsaVerify(op.Ecdsa.Secp256r1, messageHash, sigR, sigS, pubkeyX, pubkeyY);
expect(result).toBe(true);
```
### 2. Arithmetic and Bitwise Operations
[Section titled “2. Arithmetic and Bitwise Operations”](#2-arithmetic-and-bitwise-operations)
The following opcodes are demonstrated:
* `op.addw`
* `op.bitLength`
* `op.getBit`
* `op.setBit`
```ts
import { op, Uint64 } from '@algorandfoundation/algorand-typescript';
// 128-bit addition: returns the [high, low] uint64 words of the result
const [hi, lo] = op.addw(Uint64(2n ** 63n), Uint64(2n ** 63n));
// Bitwise operations
const value = Uint64(42);
const bitLength = op.bitLength(value);
const isBitSet = op.getBit(value, 3);
const newValue = op.setBit(value, 2, 1);
```
For a comprehensive list of all opcodes and types, refer to the [coverage](../coverage) page.
## Emulated Types Requiring Transaction Context
[Section titled “Emulated Types Requiring Transaction Context”](#emulated-types-requiring-transaction-context)
These types necessitate interaction with the transaction context:
### op.Global
[Section titled “op.Global”](#opglobal)
```ts
import { op, arc4, uint64, Uint64 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class MyContract extends arc4.Contract {
@arc4.abimethod()
checkGlobals(): uint64 {
return op.Global.minTxnFee + op.Global.minBalance;
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
ctx.ledger.patchGlobalData({
minTxnFee: 1000,
minBalance: 100000,
});
const contract = ctx.contract.create(MyContract);
const result = contract.checkGlobals();
expect(result).toEqual(101000);
```
### op.Txn
[Section titled “op.Txn”](#optxn)
```ts
import { op, arc4 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class MyContract extends arc4.Contract {
@arc4.abimethod()
checkTxnFields(): arc4.Address {
return new arc4.Address(op.Txn.sender);
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(MyContract);
const customSender = ctx.any.account();
ctx.txn.createScope([ctx.any.txn.applicationCall({ sender: customSender })]).execute(() => {
const result = contract.checkTxnFields();
expect(result).toEqual(customSender);
});
```
### op.AssetHolding
[Section titled “op.AssetHolding”](#opassetholding)
```ts
import { Account, arc4, Asset, op, uint64, Uint64 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class AssetContract extends arc4.Contract {
@arc4.abimethod()
checkAssetHolding(account: Account, asset: Asset): uint64 {
const [balance, _] = op.AssetHolding.assetBalance(account, asset);
return balance;
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(AssetContract);
const asset = ctx.any.asset({ total: 1000000 });
const account = ctx.any.account({ optedAssetBalances: new Map([[asset.id, Uint64(5000)]]) });
const result = contract.checkAssetHolding(account, asset);
expect(result).toEqual(5000);
```
### op.AppGlobal
[Section titled “op.AppGlobal”](#opappglobal)
```ts
import { arc4, bytes, Bytes, op, uint64, Uint64 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class StateContract extends arc4.Contract {
@arc4.abimethod()
setAndGetState(key: bytes, value: uint64): uint64 {
op.AppGlobal.put(key, value);
return op.AppGlobal.getUint64(key);
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(StateContract);
const key = Bytes('test_key');
const value = Uint64(42);
const result = contract.setAndGetState(key, value);
expect(result).toEqual(value);
const [storedValue, _] = ctx.ledger.getGlobalState(contract, key);
expect(storedValue?.value).toEqual(42);
```
### op.Block
[Section titled “op.Block”](#opblock)
```ts
import { arc4, bytes, op } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class BlockInfoContract extends arc4.Contract {
@arc4.abimethod()
getBlockSeed(): bytes {
return op.Block.blkSeed(1000);
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(BlockInfoContract);
ctx.ledger.patchBlockData(1000, { seed: op.itob(123456), timestamp: 1625097600 });
const seed = contract.getBlockSeed();
expect(seed).toEqual(op.itob(123456));
```
### op.AcctParams
[Section titled “op.AcctParams”](#opacctparams)
```ts
import type { Account, uint64 } from '@algorandfoundation/algorand-typescript';
import { arc4, assert, op, Uint64 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class AccountParamsContract extends arc4.Contract {
@arc4.abimethod()
getAccountBalance(account: Account): uint64 {
const [balance, exists] = op.AcctParams.acctBalance(account);
assert(exists);
return balance;
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(AccountParamsContract);
const account = ctx.any.account({ balance: 1000000 });
const balance = contract.getAccountBalance(account);
expect(balance).toEqual(Uint64(1000000));
```
### op.AppParams
[Section titled “op.AppParams”](#opappparams)
```ts
import type { Application } from '@algorandfoundation/algorand-typescript';
import { arc4, assert, op } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class AppParamsContract extends arc4.Contract {
@arc4.abimethod()
getAppCreator(appId: Application): arc4.Address {
const [creator, exists] = op.AppParams.appCreator(appId);
assert(exists);
return new arc4.Address(creator);
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(AppParamsContract);
const app = ctx.any.application();
const creator = contract.getAppCreator(app);
expect(creator).toEqual(ctx.defaultSender);
```
### op.AssetParams
[Section titled “op.AssetParams”](#opassetparams)
```ts
import type { uint64 } from '@algorandfoundation/algorand-typescript';
import { arc4, assert, op } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class AssetParamsContract extends arc4.Contract {
@arc4.abimethod()
getAssetTotal(assetId: uint64): uint64 {
const [total, exists] = op.AssetParams.assetTotal(assetId);
assert(exists);
return total;
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(AssetParamsContract);
const asset = ctx.any.asset({ total: 1000000, decimals: 6 });
const total = contract.getAssetTotal(asset.id);
expect(total).toEqual(1000000);
```
### op.Box
[Section titled “op.Box”](#opbox)
```ts
import type { bytes } from '@algorandfoundation/algorand-typescript';
import { arc4, assert, Bytes, op } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class BoxStorageContract extends arc4.Contract {
@arc4.abimethod()
storeAndRetrieve(key: bytes, value: bytes): bytes {
op.Box.put(key, value);
const [retrievedValue, exists] = op.Box.get(key);
assert(exists);
return retrievedValue;
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(BoxStorageContract);
const key = Bytes('test_key');
const value = Bytes('test_value');
const result = contract.storeAndRetrieve(key, value);
expect(result).toEqual(value);
const storedValue = ctx.ledger.getBox(contract, key);
expect(storedValue).toEqual(value);
```
### compile
[Section titled “compile”](#compile)
```ts
import { arc4, compile, uint64 } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class MockContract extends arc4.Contract {}
class ContractFactory extends arc4.Contract {
@arc4.abimethod()
compileAndGetBytes(): uint64 {
const compiled = compile(MockContract);
return compiled.localBytes;
}
}
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
const contract = ctx.contract.create(ContractFactory);
const mockApp = ctx.any.application({ localNumBytes: 4 });
ctx.setCompiledApp(MockContract, mockApp.id);
const result = contract.compileAndGetBytes();
expect(result).toBe(4);
```
## Mockable Opcodes
[Section titled “Mockable Opcodes”](#mockable-opcodes)
These opcodes are mockable in `algorand-typescript-testing`, allowing for controlled testing of complex operations. Note that the module being mocked is `@algorandfoundation/algorand-typescript-testing/internal`, which holds the stub implementations of `algorand-typescript` functions to be executed in a Node.js environment.
### op.vrfVerify
[Section titled “op.vrfVerify”](#opvrfverify)
```ts
import { expect, Mock, test, vi } from 'vitest';
import { bytes, Bytes, op, VrfVerify } from '@algorandfoundation/algorand-typescript';
vi.mock(
import('@algorandfoundation/algorand-typescript-testing/internal'),
async importOriginal => {
const mod = await importOriginal();
return {
...mod,
op: {
...mod.op,
vrfVerify: vi.fn(),
},
};
},
);
test('mock vrfVerify', () => {
const mockedVrfVerify = op.vrfVerify as Mock;
const mockResult = [Bytes('mock_output'), true] as readonly [bytes, boolean];
mockedVrfVerify.mockReturnValue(mockResult);
const result = op.vrfVerify(
VrfVerify.VrfAlgorand,
Bytes('proof'),
Bytes('message'),
Bytes('public_key'),
);
expect(result).toEqual(mockResult);
});
```
### op.EllipticCurve
[Section titled “op.EllipticCurve”](#opellipticcurve)
```ts
import { expect, Mock, test, vi } from 'vitest';
import { Bytes, op } from '@algorandfoundation/algorand-typescript';
vi.mock(
import('@algorandfoundation/algorand-typescript-testing/internal'),
async importOriginal => {
const mod = await importOriginal();
return {
...mod,
op: {
...mod.op,
EllipticCurve: {
...mod.op.EllipticCurve,
add: vi.fn(),
},
},
};
},
);
test('mock EllipticCurve', () => {
const mockedEllipticCurveAdd = op.EllipticCurve.add as Mock;
const mockResult = Bytes('mock_output');
mockedEllipticCurveAdd.mockReturnValue(mockResult);
const result = op.EllipticCurve.add(op.Ec.BN254g1, Bytes('A'), Bytes('B'));
expect(result).toEqual(mockResult);
});
```
These examples demonstrate how to mock key mockable opcodes in `algorand-typescript-testing`. Use similar techniques (in your preferred testing framework) for other mockable opcodes like `mimc` and `JsonRef`.
Mocking these opcodes allows you to:
1. Control complex operations’ behavior not covered by *implemented* and *emulated* types.
2. Test edge cases and error conditions.
3. Isolate contract logic from external dependencies.
```ts
// test cleanup
ctx.reset();
```
# Testing Guide
The Algorand TypeScript Testing framework provides powerful tools for testing Algorand TypeScript smart contracts within a Node.js environment. This guide covers the main features and concepts of the framework, helping you write effective tests for your Algorand applications.
```{note}
For all code examples in the _Testing Guide_ section, assume `ctx` is an instance of `TestExecutionContext` obtained by initialising the `TestExecutionContext` class. All subsequent code is executed within this context.
```
The Algorand TypeScript Testing framework streamlines unit testing of your Algorand TypeScript smart contracts by offering functionality to:
1. Simulate the Algorand Virtual Machine (AVM) environment
2. Create and manipulate test accounts, assets, applications, transactions, and ARC4 types
3. Test smart contract classes, including their states, variables, and methods
4. Verify logic signatures and subroutines
5. Manage global state, local state, scratch slots, and boxes in test contexts
6. Simulate transactions and transaction groups, including inner transactions
7. Verify opcode behavior
By using this framework, you can ensure your Algorand TypeScript smart contracts function correctly before deploying them to a live network.
Key features of the framework include:
* `TestExecutionContext`: The main entry point for testing, providing access to various testing utilities and simulated blockchain state
* AVM Type Simulation: Accurate representations of AVM types like `uint64` and `bytes`
* ARC4 Support: Tools for testing ARC4 contracts and methods, including struct definitions and ABI encoding/decoding
* Transaction Simulation: Ability to create and execute various transaction types
* State Management: Tools for managing and verifying global and local state changes
* Opcode Simulation: Implementations of AVM opcodes for accurate smart contract behavior testing
The framework is designed to work seamlessly with Algorand TypeScript smart contracts, allowing developers to write comprehensive unit tests that closely mimic the behavior of contracts on the Algorand blockchain.
The following video includes a practical tutorial of how to do tests in Algorand:
[Algorand Smart Contract Testing - TypeScript](https://www.youtube.com/embed/6SSga2FCg-c?rel=0)
## Table of Contents
[Section titled “Table of Contents”](#table-of-contents)
* [Concepts](./concepts)
* [AVM Types](./avm-types)
* [ARC4 Types](./arc4-types)
* [Transactions](./transactions)
* [Smart Contract Testing](./contract-testing)
* [Smart Signature Testing](./signature-testing)
* [State Management](./state-management)
* [AVM Opcodes](./opcodes)
# Smart Signature Testing
Test Algorand smart signatures (LogicSigs) with ease using the Algorand TypeScript Testing framework.
```ts
import * as algots from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
```
## Define a LogicSig
[Section titled “Define a LogicSig”](#define-a-logicsig)
Extend the `algots.LogicSig` class to create a LogicSig:
```ts
import * as algots from '@algorandfoundation/algorand-typescript';
class HashedTimeLockedLogicSig extends algots.LogicSig {
program(): boolean {
// LogicSig code here
return true; // Approve transaction
}
}
```
## Execute and Test
[Section titled “Execute and Test”](#execute-and-test)
Use `ctx.executeLogicSig()` to run and verify LogicSigs:
```ts
ctx.txn.createScope([ctx.any.txn.payment()]).execute(() => {
const result = ctx.executeLogicSig(new HashedTimeLockedLogicSig(), algots.Bytes('secret'));
expect(result).toBe(true);
});
```
`executeLogicSig()` returns a boolean:
* `true`: Transaction approved
* `false`: Transaction rejected
## Pass Arguments
[Section titled “Pass Arguments”](#pass-arguments)
Provide arguments to LogicSigs using `executeLogicSig()`:
```ts
const result = ctx.executeLogicSig(new HashedTimeLockedLogicSig(), algots.Bytes('secret'));
```
Access arguments in the LogicSig with `algots.op.arg()` opcode:
```ts
import * as algots from '@algorandfoundation/algorand-typescript';
class HashedTimeLockedLogicSig extends algots.LogicSig {
  program(): boolean {
    // LogicSig code here
    const secret = algots.op.arg(0);
    const expectedHash = algots.op.sha256(algots.Bytes('secret'));
    return algots.op.sha256(secret) === expectedHash;
  }
}
// Example usage
const secret = algots.Bytes('secret');
expect(ctx.executeLogicSig(new HashedTimeLockedLogicSig(), secret)).toBe(true);
```
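Outside the framework, the same hashed-secret check can be sketched stand-alone, using Node's `crypto` module in place of the AVM `sha256` opcode (illustrative only; the on-chain program must use the opcodes as shown above):

```typescript
import { createHash } from 'node:crypto';

// Stand-in for the AVM sha256 opcode
const sha256 = (data: Buffer): Buffer => createHash('sha256').update(data).digest();

// Mirrors HashedTimeLockedLogicSig.program(): approve only when the
// supplied argument hashes to the expected value
function program(arg0: Buffer): boolean {
  const expectedHash = sha256(Buffer.from('secret'));
  return sha256(arg0).equals(expectedHash);
}

console.log(program(Buffer.from('secret'))); // true: transaction approved
console.log(program(Buffer.from('wrong'))); // false: transaction rejected
```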
For more details on available operations, see the [coverage](../coverage) page.
```ts
// test cleanup
ctx.reset();
```
# State Management
`algorand-typescript-testing` provides tools to test state-related abstractions in Algorand smart contracts. This guide covers global state, local state, boxes, and scratch space management.
```ts
import * as algots from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
```
## Global State
[Section titled “Global State”](#global-state)
Global state is represented as instance attributes on `algots.Contract` and `algots.arc4.Contract` classes.
```ts
class MyContract extends algots.arc4.Contract {
  stateA = algots.GlobalState({ key: 'globalStateA' });
  stateB = algots.GlobalState({ initialValue: algots.Uint64(1), key: 'globalStateB' });
}
// In your test
const contract = ctx.contract.create(MyContract);
contract.stateA.value = algots.Uint64(10);
contract.stateB.value = algots.Uint64(20);
```
## Local State
[Section titled “Local State”](#local-state)
Local state is defined similarly to global state, but accessed using account addresses as keys.
```ts
class MyContract extends algots.arc4.Contract {
  localStateA = algots.LocalState({ key: 'localStateA' });
}
// In your test
const contract = ctx.contract.create(MyContract);
const account = ctx.any.account();
contract.localStateA(account).value = algots.Uint64(10);
```
## Boxes
[Section titled “Boxes”](#boxes)
The framework supports various Box abstractions available in `algorand-typescript`.
```ts
class MyContract extends algots.arc4.Contract {
  box: algots.Box | undefined;
  boxMap = algots.BoxMap({ keyPrefix: 'boxMap' });

  @algots.arc4.abimethod()
  someMethod(keyA: algots.bytes, keyB: algots.bytes, keyC: algots.bytes) {
    this.box = algots.Box({ key: keyA });
    this.box.value = algots.Uint64(1);
    this.boxMap.set(keyB, algots.Uint64(1));
    this.boxMap.set(keyC, algots.Uint64(2));
  }
}
// In your test
const contract = ctx.contract.create(MyContract);
const keyA = algots.Bytes('keyA');
const keyB = algots.Bytes('keyB');
const keyC = algots.Bytes('keyC');
contract.someMethod(keyA, keyB, keyC);
// Access boxes
const boxContent = ctx.ledger.getBox(contract, keyA);
expect(ctx.ledger.boxExists(contract, keyA)).toBe(true);
// Set box content manually
ctx.ledger.setBox(contract, keyA, algots.op.itob(algots.Uint64(1)));
```
## Scratch Space
[Section titled “Scratch Space”](#scratch-space)
Scratch space is represented as a list of 256 slots for each transaction.
```ts
@algots.contract({ scratchSlots: [1, 2, { from: 3, to: 20 }] })
class MyContract extends algots.Contract {
  approvalProgram(): boolean {
    algots.op.Scratch.store(1, algots.Uint64(5));
    algots.assert(algots.op.Scratch.loadUint64(1) === algots.Uint64(5));
    return true;
  }
}
// In your test
const contract = ctx.contract.create(MyContract);
const result = contract.approvalProgram();
expect(result).toBe(true);
const scratchSpace = ctx.txn.lastGroup.getScratchSpace();
expect(scratchSpace[1]).toEqual(5);
```
For more detailed information, explore the example contracts in the `examples/` directory, the [coverage](../coverage) page, and the [API documentation](../api).
```ts
// test cleanup
ctx.reset();
```
# Transactions
The testing framework follows the transaction definitions described in the [`algorand-typescript` docs](https://github.com/algorandfoundation/puya-ts/blob/main/docs/lg-transactions). This section focuses on *value generators* and interactions with inner transactions. It also explains how the framework identifies the *active* transaction group during contract method, subroutine, or logicsig invocation.
```ts
import * as algots from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
// Create the context manager for snippets below
const ctx = new TestExecutionContext();
```
## Group Transactions
[Section titled “Group Transactions”](#group-transactions)
Group transactions are the test implementations of the transaction stubs available under the `algots.gtxn.*` namespace. They are produced by the [`TxnValueGenerator`](../code/value-generators/txn/classes/TxnValueGenerator) instance, accessible via the `ctx.any.txn` property:
```ts
// Generate a random payment transaction
const payTxn = ctx.any.txn.payment({
  sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided
  receiver: ctx.any.account(), // Required
  amount: 1000000, // Required
});
// Generate a random asset transfer transaction
const assetTransferTxn = ctx.any.txn.assetTransfer({
  sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided
  assetReceiver: ctx.any.account(), // Required
  xferAsset: ctx.any.asset({ assetId: 1 }), // Required
  assetAmount: 1000, // Required
});
// Generate a random application call transaction
const appCallTxn = ctx.any.txn.applicationCall({
  appId: ctx.any.application(), // Required
  appArgs: [algots.Bytes('arg1'), algots.Bytes('arg2')], // Optional: Defaults to empty list if not provided
  accounts: [ctx.any.account()], // Optional: Defaults to empty list if not provided
  assets: [ctx.any.asset()], // Optional: Defaults to empty list if not provided
  apps: [ctx.any.application()], // Optional: Defaults to empty list if not provided
  approvalProgramPages: [algots.Bytes('approval_code')], // Optional: Defaults to empty list if not provided
  clearStateProgramPages: [algots.Bytes('clear_code')], // Optional: Defaults to empty list if not provided
  scratchSpace: { 0: algots.Bytes('scratch') }, // Optional: Defaults to empty object if not provided
});
// Generate a random asset config transaction
const assetConfigTxn = ctx.any.txn.assetConfig({
  sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided
  configAsset: undefined, // Optional: If not provided, creates a new asset
  total: 1000000, // Required for new assets
  decimals: 0, // Required for new assets
  defaultFrozen: false, // Optional: Defaults to false if not provided
  unitName: algots.Bytes('UNIT'), // Optional: Defaults to empty string if not provided
  assetName: algots.Bytes('Asset'), // Optional: Defaults to empty string if not provided
  url: algots.Bytes('http://asset-url'), // Optional: Defaults to empty string if not provided
  metadataHash: algots.Bytes('metadata_hash'), // Optional: Defaults to empty bytes if not provided
  manager: ctx.any.account(), // Optional: Defaults to sender if not provided
  reserve: ctx.any.account(), // Optional: Defaults to zero address if not provided
  freeze: ctx.any.account(), // Optional: Defaults to zero address if not provided
  clawback: ctx.any.account(), // Optional: Defaults to zero address if not provided
});
// Generate a random key registration transaction
const keyRegTxn = ctx.any.txn.keyRegistration({
  sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided
  voteKey: algots.Bytes('vote_pk'), // Optional: Defaults to empty bytes if not provided
  selectionKey: algots.Bytes('selection_pk'), // Optional: Defaults to empty bytes if not provided
  voteFirst: 1, // Optional: Defaults to 0 if not provided
  voteLast: 1000, // Optional: Defaults to 0 if not provided
  voteKeyDilution: 10000, // Optional: Defaults to 0 if not provided
});
// Generate a random asset freeze transaction
const assetFreezeTxn = ctx.any.txn.assetFreeze({
  sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided
  freezeAsset: ctx.ledger.getAsset(algots.Uint64(1)), // Required
  freezeAccount: ctx.any.account(), // Required
  frozen: true, // Required
});
```
## Preparing for execution
[Section titled “Preparing for execution”](#preparing-for-execution)
When a smart contract instance (application) is interacted with on the Algorand network, the interaction happens within a specific transaction or transaction group in which one or more transactions are application calls to the target smart contract instances.
To emulate this behaviour, the `createScope` context manager is available on the [`TransactionContext`](../code/subcontexts/transaction-context/classes/TransactionContext) instance. It allows setting temporary transaction fields within a specific scope, passing in emulated transaction objects, and identifying the active transaction index within the transaction group.
```ts
import { arc4, Txn } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class SimpleContract extends arc4.Contract {
  @arc4.abimethod()
  checkSender(): arc4.Address {
    return new arc4.Address(Txn.sender);
  }
}
const ctx = new TestExecutionContext();
// Create a contract instance
const contract = ctx.contract.create(SimpleContract);
// Use createScope to change the sender of the active transaction
const testSender = ctx.any.account();
ctx.txn
  .createScope([ctx.any.txn.applicationCall({ appId: contract, sender: testSender })])
  .execute(() => {
    // Call the contract method
    const result = contract.checkSender();
    expect(result).toEqual(testSender);
  });
// Assert that the sender is the test_sender after exiting the
// transaction group context
expect(ctx.txn.lastActive.sender).toEqual(testSender);
// Assert the size of last transaction group
expect(ctx.txn.lastGroup.transactions.length).toEqual(1);
```
## Inner Transaction
[Section titled “Inner Transaction”](#inner-transaction)
Inner transactions are AVM transactions that are signed and executed by AVM applications (instances of deployed smart contracts or signatures).
When testing smart contracts, to stay consistent with the AVM, the framework *does not allow* you to submit inner transactions outside of a contract/subroutine invocation, but you can interact with and manage inner transactions using the test execution context as follows:
```ts
import { arc4, Asset, itxn, TransactionType, Txn } from '@algorandfoundation/algorand-typescript';
import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing';
class MyContract extends arc4.Contract {
  @arc4.abimethod()
  payViaItxn(asset: Asset) {
    itxn
      .payment({
        receiver: Txn.sender,
        amount: 1,
      })
      .submit();
  }
}
// setup context
const ctx = new TestExecutionContext();
// Create a contract instance
const contract = ctx.contract.create(MyContract);
// Generate a random asset
const asset = ctx.any.asset();
// Execute the contract method
contract.payViaItxn(asset);
// Access the last submitted inner transaction
const paymentTxn = ctx.txn.lastGroup.lastItxnGroup().getPaymentInnerTxn();
// Assert properties of the inner transaction
expect(paymentTxn.receiver).toEqual(ctx.txn.lastActive.sender);
expect(paymentTxn.amount).toEqual(1);
// Access all inner transactions in the last group
ctx.txn.lastGroup.itxnGroups.at(-1)?.itxns.forEach(itxn => {
  // Perform assertions on each inner transaction
  expect(itxn.type).toEqual(TransactionType.Payment);
});
// Access a specific inner transaction group
const firstItxnGroup = ctx.txn.lastGroup.getItxnGroup(0);
const firstPaymentTxn = firstItxnGroup.getPaymentInnerTxn(0);
expect(firstPaymentTxn.type).toEqual(TransactionType.Payment);
```
In this example, we define a contract method `payViaItxn` that creates and submits an inner payment transaction. The test execution context automatically captures and stores the inner transactions submitted by the contract method.
Note that we don’t need to wrap the execution in a `createScope` context manager because the method is decorated with `@arc4.abimethod`, which automatically creates a transaction group for the method. The `createScope` context manager is only needed when you want to create more complex transaction groups or patch transaction fields for various transaction-related opcodes in AVM.
To access the submitted inner transactions:
1. Use `ctx.txn.lastGroup.lastItxnGroup().getPaymentInnerTxn()` to access the last submitted inner transaction of a specific type, in this case a payment transaction.
2. Iterate over all inner transactions in the last group using `ctx.txn.lastGroup.itxnGroups.at(-1)?.itxns`.
3. Access a specific inner transaction group using `ctx.txn.lastGroup.getItxnGroup(index)`.
These methods provide type validation and will raise an error if the requested transaction type doesn’t match the actual type of the inner transaction.
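The type-validation behaviour can be pictured with a small stand-alone sketch (illustrative names only, not the framework's internals): a typed getter checks the transaction's type before returning it and throws on a mismatch.

```typescript
// Illustrative sketch of a typed inner-transaction getter with runtime
// type validation (not the framework's actual implementation)
enum TxnType {
  Payment,
  AssetTransfer,
}

interface InnerTxn {
  type: TxnType;
  amount: number;
}

// Returns the inner transaction at `index` only if it is a payment,
// mirroring the validation described above; defaults to the last one
function getPaymentInnerTxn(group: InnerTxn[], index: number = group.length - 1): InnerTxn {
  const txn = group[index];
  if (txn.type !== TxnType.Payment) {
    throw new Error(`expected a payment transaction, got ${TxnType[txn.type]}`);
  }
  return txn;
}

const group: InnerTxn[] = [{ type: TxnType.Payment, amount: 1 }];
console.log(getPaymentInnerTxn(group).amount); // 1
```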
## References
[Section titled “References”](#references)
* [API](../api) for more details on the test context manager and inner transactions related methods that perform implicit inner transaction type validation.
* [Examples](../examples) for more examples of smart contracts and associated tests that interact with inner transactions.
```ts
// test cleanup
ctx.reset();
```
# AlgoKit Clients
When building on Algorand, you need reliable ways to communicate with the blockchain—sending transactions, interacting with smart contracts, and accessing blockchain data. AlgoKit Utils clients provide straightforward, developer-friendly interfaces for these interactions, reducing the complexity typically associated with blockchain development. This guide explains how to use these clients to simplify common Algorand development tasks, whether you’re sending a basic transaction or deploying complex smart contracts.
AlgoKit offers two main types of clients to interact with the Algorand blockchain:
1. **Algorand Client** - A general-purpose client for all Algorand interactions, including:
* Crafting, grouping, and sending transactions through a fluent interface of chained methods
* Accessing network services through REST API clients for algod, indexer, and kmd
* Configuring connection and transaction parameters with sensible defaults and optional overrides
2. **Typed Application Client** - A specialized, auto-generated client for interacting with specific smart contracts:
* Provides type-safe interfaces generated from [ARC-56](/arc-standards/arc-0056) or [ARC-32](/arc-standards/arc-0032) contract specification files
* Enables IntelliSense-driven development experience that includes the smart contract methods
* Reduces errors through real-time type checking of arguments provided to smart contract methods
Let’s explore each client type in detail.
## Algorand Client: Gateway to the Blockchain
[Section titled “Algorand Client: Gateway to the Blockchain”](#algorand-client-gateway-to-the-blockchain)
The `AlgorandClient` serves as your primary entry point for all Algorand operations. Think of it as your Swiss Army knife for blockchain interactions.
### Getting Started with AlgorandClient
[Section titled “Getting Started with AlgorandClient”](#getting-started-with-algorandclient)
You can create an AlgorandClient instance in several ways, depending on your needs:
* TypeScript
```ts
// Point to the network configured through environment variables or
// if no environment variables it will point to the default LocalNet
// configuration
const client1 = AlgorandClient.fromEnvironment()
// Point to default LocalNet configuration
const client2 = AlgorandClient.defaultLocalNet()
// Point to TestNet using AlgoNode free tier
const client3 = AlgorandClient.testNet()
// Point to MainNet using AlgoNode free tier
const client4 = AlgorandClient.mainNet()
// Point to a pre-created algod client
const client5 = AlgorandClient.fromClients({ algod })
// Point to pre-created algod, indexer and kmd clients
const client6 = AlgorandClient.fromClients({ algod, indexer, kmd })
// Point to custom configuration for algod
const client7 = AlgorandClient.fromConfig({
  algodConfig: {
    server: 'http://localhost',
    port: '4001',
    token: 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
  },
})
// Point to custom configuration for algod, indexer and kmd
const client8 = AlgorandClient.fromConfig({
  algodConfig: algodConfig,
  indexerConfig: indexerConfig,
  kmdConfig: kmdConfig,
})
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L33)
* Python
```py
# Point to the network configured through environment variables or
# if no environment variables it will point to the default LocalNet
# configuration
algorand_client = AlgorandClient.from_environment()
# Point to default LocalNet configuration
algorand_client = AlgorandClient.default_localnet()
# Point to TestNet using AlgoNode free tier
algorand_client = AlgorandClient.testnet()
# Point to MainNet using AlgoNode free tier
algorand_client = AlgorandClient.mainnet()
# Point to a pre-created algod client
algorand_client = AlgorandClient.from_clients(algod)
# Point to pre-created algod, indexer and kmd clients
algorand_client = AlgorandClient.from_clients(algod, indexer, kmd)
# Point to custom configuration for algod
algorand_client = AlgorandClient.from_config(
    AlgoClientNetworkConfig(
        server="http://localhost:4001",
        token="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
    )
)
# Point to custom configuration for algod, indexer and kmd
algorand_client = AlgorandClient.from_config(
    algod_config, indexer_config, kmd_config
)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L28)
These factory methods make it easy to connect to different Algorand networks without manually configuring connection details.
Once you have an `AlgorandClient` instance, you can access the REST API clients for the various Algorand APIs via the `AlgorandClient.client` property:
* TypeScript
```ts
const algorandClient = AlgorandClient.fromEnvironment()
const algodClient = algorandClient.client.algod
const indexerClient = algorandClient.client.indexer
const kmdClient = algorandClient.client.kmd
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L64)
* Python
```py
algod = algorand_client.client.algod
indexer = algorand_client.client.indexer
kmd = algorand_client.client.kmd
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L56)
For more information about the functionalities of the REST API clients, refer to the following pages:
[algod API Reference ](/reference/rest-api/algod)Interact with Algorand nodes, submit transactions, and get blockchain status
[Indexer API Reference ](/reference/rest-api/indexer)Query historical transactions, account information, and blockchain data
[kmd API Reference ](/reference/rest-api/kmd)Manage wallets and keys (primarily for development environments)
### Understanding AlgorandClient’s Stateful Design
[Section titled “Understanding AlgorandClient’s Stateful Design”](#understanding-algorandclients-stateful-design)
The `AlgorandClient` is “stateful”, meaning that it caches information that is reused across calls. This allows the `AlgorandClient` to avoid redundant requests and provide a more efficient interface for interacting with the blockchain. This is an important concept to understand before using the `AlgorandClient`.
#### Account Signer Caching
[Section titled “Account Signer Caching”](#account-signer-caching)
When sending transactions, you need to sign them with a private key. `AlgorandClient` can cache these signing capabilities, eliminating the need to provide signing information for every transaction, as you can see in the following example:
* TypeScript
```ts
/*
 * If you don't want the Algorand client to cache the signer,
 * you can manually provide the signer.
 */
await algorand.send.payment({
  sender: randomAccountA,
  receiver: randomAccountB,
  amount: AlgoAmount.Algo(1),
  signer: randomAccountA.signer, // The signer must be manually provided
})
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L72)
* Python
```py
"""
If you don't want the Algorand client to cache the signer,
you can manually provide the signer.
"""
algorand_client.send.payment(
    PaymentParams(
        sender=account_a.address,
        receiver=account_b.address,
        amount=AlgoAmount(algo=1),
        signer=account_a.signer,  # The signer must be manually provided
    )
)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L64)
The same example, this time demonstrating the different approaches to signer caching:
* TypeScript
```ts
/*
 * By setting signers of accounts to the algorand client, the client will cache the signers
 * and use them to sign transactions when the sender is one of the accounts.
 */
// If no signer is provided, the client will use the default signer
algorand.setDefaultSigner(randomAccountA.signer)
// If you have an address and a signer, use this method to set the signer
algorand.setSigner(randomAccountA.addr, randomAccountA.signer)
// If you have a `SigningAccount` object, use this method to set the signer
algorand.setSignerFromAccount(randomAccountA)
/*
 * The Algorand client can directly send this payment transaction without
 * needing a signer because it is tracking the signer for account_a.
 */
await algorand.send.payment({
  sender: randomAccountA,
  receiver: randomAccountB,
  amount: AlgoAmount.Algo(1),
})
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L85)
* Python
```py
"""
By setting signers of accounts to the algorand client, the client will cache the signers
and use them to sign transactions when the sender is one of the accounts.
"""
# If no signer is provided, the client will use the default signer
algorand_client.set_default_signer(account_a.signer)
# If you have an address and a signer, use this method to set the signer
algorand_client.set_signer(account_a.address, account_a.signer)
# If you have a `SigningAccount` object, use this method to set the signer
algorand_client.set_signer_from_account(account_a)
"""
The Algorand client can directly send this payment transaction without
needing a signer because it is tracking the signer for account_a.
"""
algorand_client.send.payment(
    PaymentParams(
        sender=account_a.address,
        receiver=account_b.address,
        amount=AlgoAmount(algo=1),
    )
)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L80)
This caching mechanism simplifies your code, especially when sending multiple transactions from the same account.
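Conceptually, the signer cache is a mapping from sender address to signer, consulted in a fixed order. A minimal stand-alone sketch of that resolution logic (illustrative names, not the real `AlgorandClient` internals):

```typescript
// Minimal sketch of signer caching: an explicitly provided signer wins,
// then a signer cached for the sender's address, then the default signer
type Signer = (txn: string) => string;

class SignerRegistry {
  private signers = new Map<string, Signer>();
  private defaultSigner?: Signer;

  setSigner(address: string, signer: Signer): void {
    this.signers.set(address, signer);
  }

  setDefaultSigner(signer: Signer): void {
    this.defaultSigner = signer;
  }

  resolve(sender: string, explicit?: Signer): Signer {
    const signer = explicit ?? this.signers.get(sender) ?? this.defaultSigner;
    if (!signer) throw new Error(`no signer known for ${sender}`);
    return signer;
  }
}

// Usage: once a signer is registered for an address, transactions from
// that sender no longer need to carry one explicitly
const registry = new SignerRegistry();
registry.setSigner('ACCOUNT_A', (txn) => `signed(${txn}) by A`);
console.log(registry.resolve('ACCOUNT_A')('payment')); // signed(payment) by A
```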
#### Suggested Parameter Caching
[Section titled “Suggested Parameter Caching”](#suggested-parameter-caching)
`AlgorandClient` automatically caches network-provided transaction values ([suggested parameters](/reference/rest-api/algod#transactionparams)) to reduce network traffic. A set of default configurations controls this behavior, and you can override them as needed.
##### What Are Suggested Parameters?
[Section titled “What Are Suggested Parameters?”](#what-are-suggested-parameters)
In Algorand, every transaction requires a set of network-specific parameters that define how the transaction should be processed. These “suggested parameters” include:
* **Fee:** The transaction fee (in microAlgos)
* **First Valid Round:** The first blockchain round where the transaction can be processed
* **Last Valid Round:** The last blockchain round where the transaction can be processed (after this, the transaction expires)
* **Genesis ID:** The identifier for the Algorand network (e.g., “mainnet-v1.0”)
* **Genesis Hash:** The hash of the genesis block for the network
* **Min Fee:** The minimum fee required by the network
These parameters are called “suggested” because the network provides recommended values, but developers can modify them (for example, to increase the fee during network congestion).
##### Why Cache These Parameters?
[Section titled “Why Cache These Parameters?”](#why-cache-these-parameters)
Without caching, your application would need to request these parameters from the network before every transaction, which:
* **Increases latency:** Each transaction would require an additional network request
* **Increases network load:** Both for your application and the Algorand node
* **Slows down user experience:** Especially when creating multi-transaction groups
Since these parameters only change every few seconds (when new blocks are created), repeatedly requesting them wastes resources.
##### How Parameter Caching Works
[Section titled “How Parameter Caching Works”](#how-parameter-caching-works)
The `AlgorandClient` automatically:
1. Requests suggested parameters when needed
2. Caches them for a configurable time period (default: 3 seconds)
3. Reuses the cached values for subsequent transactions
4. Refreshes the cache when it expires
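The four steps above amount to a small time-based cache. A stand-alone sketch of the pattern (names are illustrative; the real client fetches the parameters from algod):

```typescript
// Illustrative time-based cache for suggested parameters
interface SuggestedParams {
  fee: number;
  firstValid: number;
  lastValid: number;
}

class ParamsCache {
  private cached?: { value: SuggestedParams; expiresAt: number };

  constructor(
    private fetchParams: () => SuggestedParams, // stand-in for the algod request
    private timeoutMs: number = 3_000, // default cache window: 3 seconds
  ) {}

  get(now: number = Date.now()): SuggestedParams {
    // Reuse the cached value until it expires, then refresh it
    if (!this.cached || now >= this.cached.expiresAt) {
      this.cached = { value: this.fetchParams(), expiresAt: now + this.timeoutMs };
    }
    return this.cached.value;
  }
}

// Usage: only two fetches happen across three requests
let fetches = 0;
const cache = new ParamsCache(() => {
  fetches += 1;
  return { fee: 1000, firstValid: 1, lastValid: 1001 };
});
cache.get(0); // miss: fetch
cache.get(1_000); // hit: still within the 3-second window
cache.get(4_000); // miss: window expired, fetch again
console.log(fetches); // 2
```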
##### Customized Parameter Caching
[Section titled “Customized Parameter Caching”](#customized-parameter-caching)
`AlgorandClient` has a set of default configurations that control this behavior, but you have the ability to override and change the configuration of this behavior:
* `algorand.setDefaultValidityWindow(validityWindow)` - Set the default validity window, i.e. the number of rounds from the current round for which a transaction remains valid for acceptance. A small value is usually ideal: a transaction with a long validity window may still be accepted well after you have concluded that it failed to submit. The validity window defaults to 10, except in automated testing, where it is set to 1000 when targeting LocalNet.
* `algorand.setSuggestedParamsCache(suggestedParams, until?)` - Set the suggested network parameters to use (optionally until the given time)
* `algorand.setSuggestedParamsCacheTimeout(timeout)` - Set the timeout used to cache the suggested network parameters (3 seconds by default)
* `algorand.getSuggestedParams()` - Get the current suggested network parameters object: either the cached value or, if the cache has expired, a fresh value
- TypeScript
```ts
/*
 * Sets the default validity window for transactions.
 * @param validityWindow The number of rounds between the first and last valid rounds
 * @returns The `algorand` so method calls can be chained
 */
algorand.setDefaultValidityWindow(1000)
/*
 * Get suggested params for a transaction (either cached or from algod if the cache is stale or empty)
 */
const sp = await algorand.getSuggestedParams()
// The suggested params can be modified like below
sp.flatFee = true
sp.fee = 2000
/*
 * Sets a cache value to use for suggested params. Use this method to use modified suggested params for
 * the next transaction.
 * @param suggestedParams The suggested params to use
 * @param until A timestamp until which to cache, or if not specified then the timeout is used
 * @returns The `algorand` so method calls can be chained
 */
algorand.setSuggestedParamsCache(sp)
/*
 * Sets the timeout for caching suggested params. If set to 0, the Algorand client
 * will request suggested params from the algod client every time.
 * @param timeout The timeout in milliseconds
 * @returns The `algorand` so method calls can be chained
 */
algorand.setSuggestedParamsCacheTimeout(0)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L111)
- Python
```py
"""
Sets the default validity window for transactions.
:param validity_window: The number of rounds between the first and last valid rounds
:return: The `AlgorandClient` so method calls can be chained
"""
algorand_client.set_default_validity_window(1000)
"""
Get suggested params for a transaction (either cached or from algod if the cache is stale or empty)
"""
sp = algorand_client.get_suggested_params()
# The suggested params can be modified like below
sp.flat_fee = True
sp.fee = 2000
"""
Sets a cache value to use for suggested params. Use this method to use modified suggested params for
the next transaction.
:param suggested_params: The suggested params to use
:param until: A timestamp until which to cache, or if not specified then the timeout is used
:return: The `AlgorandClient` so method calls can be chained
"""
algorand_client.set_suggested_params_cache(sp)
"""
Sets the timeout for caching suggested params. If set to 0, the Algorand client
will request suggested params from the algod client every time.
:param timeout: The timeout in milliseconds
:return: The `AlgorandClient` so method calls can be chained
"""
algorand_client.set_suggested_params_cache_timeout(0)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L109)
**When to Adjust Parameter Caching**
* **Building time-sensitive applications:** Reduce the validity window for transactions that shouldn’t remain pending for long
* **Developing high-throughput services:** Increase the cache timeout to reduce network requests
* **Testing transaction behavior:** Disable caching to ensure fresh parameters for each test
By understanding and properly configuring suggested parameter caching, you can optimize your application’s performance while ensuring transactions are processed correctly by the Algorand network.
## Typed App Clients: Smart Contract Interaction Simplified
[Section titled “Typed App Clients: Smart Contract Interaction Simplified”](#typed-app-clients-smart-contract-interaction-simplified)
While the `AlgorandClient` handles general blockchain interactions, typed app clients provide specialized interfaces for deployed applications. These clients are generated from contract specifications ([ARC-56](/arc-standards/arc-0056)/[ARC-32](/arc-standards/arc-0032)) and offer:
* Type-safe method calls
* Automatic parameter validation
* IntelliSense code completion support
Note
Typed app clients are the recommended way to interact with smart contracts. However, you have alternatives based on your situation. If you have an *ARC-56* or *ARC-32* app specification but prefer not to use typed clients, you can still use non-typed application clients. For smart contracts without any app specification, you’ll need to use the underlying app management and deployment functionality to manually construct your transactions.
### Generating App Clients
[Section titled “Generating App Clients”](#generating-app-clients)
The relevant smart contract’s app client is generated from its *ARC-56/ARC-32* app specification file. There are two ways to generate an application client for a smart contract:
#### 1. Using the AlgoKit Build CLI Command
[Section titled “1. Using the AlgoKit Build CLI Command”](#1-using-the-algokit-build-cli-command)
When you are using the AlgoKit smart contract template for your project, compiling your *ARC4* smart contract written in either TypeScript or Python will automatically generate the TypeScript or Python application client for you depending on what language you chose for contract interaction. Simply run the following command to generate the artifacts including the typed application client:
```shell
algokit project run build
```
After running the command, you should see the following artifacts generated in the `artifacts` directory under the `smart_contracts` directory:
* hello\_world
  * hello\_world\_client.py
  * HelloWorld.approval.puya.map
  * HelloWorld.approval.teal
  * HelloWorld.arc56.json
  * HelloWorld.clear.puya.map
  * HelloWorld.clear.teal
#### 2. Using the AlgoKit Generate CLI Command
[Section titled “2. Using the AlgoKit Generate CLI Command”](#2-using-the-algokit-generate-cli-command)
There is also an AlgoKit CLI command to generate the app client for a smart contract, which you can wire up as a custom command inside the `.algokit.toml` file in your project directory. Specify the output language for the application client via the file extension: `.ts` for TypeScript and `.py` for Python.
```shell
# To output a single arc32.json to a TypeScript typed app client:
algokit generate client path/to/arc32.json --output client.ts
# To process multiple arc32.json in a directory structure and output to a TypeScript app client for each in the current directory:
algokit generate client smart_contracts/artifacts --output {contract_name}.ts
# To process multiple arc32.json in a directory structure and output to a Python client alongside each arc32.json:
algokit generate client smart_contracts/artifacts --output {app_spec_path}/client.py
```
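For repeatability, the generate command can be wrapped as a custom command in `.algokit.toml`. A minimal sketch, assuming the `[project.run]` custom-command convention used by AlgoKit project templates (the command name `generate-client` and the paths here are hypothetical):

```toml
[project]
type = 'contract'
name = 'hello_world'

[project.run]
# Hypothetical custom command; invoke with `algokit project run generate-client`
generate-client = { commands = [
  'algokit generate client smart_contracts/artifacts --output {contract_name}.ts',
], description = 'Generate typed app clients for all compiled contracts' }
```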
When compiled, every *ARC-4* smart contract produces an `arc56.json` or `arc32.json` file, depending on which app spec format was used. This file contains the smart contract’s extended ABI, following the *ARC-56* or *ARC-32* standard respectively.
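Because the app spec is plain JSON, it is easy to inspect directly. As an illustration of the data typed clients are generated from, the stdlib-only sketch below derives ARC-4 method signatures (`name(argTypes)returnType`) from a minimal, hypothetical ARC-56-style spec; only the fields used here are shown:

```python
import json

# A minimal, hypothetical ARC-56-style app spec (toy example, not a full spec).
app_spec_json = """
{
  "name": "HelloWorld",
  "methods": [
    {
      "name": "hello",
      "args": [{"type": "string", "name": "name"}],
      "returns": {"type": "string"}
    }
  ]
}
"""

def method_signatures(spec: dict) -> list[str]:
    # Build ARC-4 method signatures of the form name(argTypes)returnType.
    return [
        f"{m['name']}({','.join(a['type'] for a in m['args'])}){m['returns']['type']}"
        for m in spec["methods"]
    ]

spec = json.loads(app_spec_json)
print(method_signatures(spec))  # ['hello(string)string']
```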
### Working with a Typed App Client Object
[Section titled “Working with a Typed App Client Object”](#working-with-a-typed-app-client-object)
To get an instance of a typed client you can use an `AlgorandClient` instance or a typed app `Factory` instance.
The approach to obtaining a client instance depends on how many app clients you require for a given app spec and whether the app has already been deployed, as summarised below:
#### App is Already Deployed
[Section titled “App is Already Deployed”](#app-is-already-deployed)
* TypeScript
```ts
/*
Get typed app client by id
*/
//For single app client instance
let appClient = await algorand.client.getTypedAppClientById(HelloWorldClient, {
appId: 1234n,
})
// or
appClient = new HelloWorldClient({
algorand,
appId: 1234n,
})
// For multiple app client instances use the factory
const factory = algorand.client.getTypedAppFactory(HelloWorldFactory)
// or
const factory2 = new HelloWorldFactory({ algorand })
const appClient1 = await factory.getAppClientById({ appId: 1234n })
const appClient2 = await factory.getAppClientById({ appId: 4321n })
/*
Get typed app client by creator and name
*/
// For single app client instance
let appClientByCreator = await algorand.client.getTypedAppClientByCreatorAndName(HelloWorldClient, {
creatorAddress: randomAccountA.addr,
appName: 'contract-name',
// ...
})
// or
appClientByCreator = await HelloWorldClient.fromCreatorAndName({
algorand,
creatorAddress: randomAccountA.addr,
appName: 'contract-name',
// ...
})
// For multiple app client instances use the factory
let appClientFactory = algorand.client.getTypedAppFactory(HelloWorldFactory)
// or
appClientFactory = new HelloWorldFactory({ algorand })
const appClientByCreator1 = await appClientFactory.getAppClientByCreatorAndName({
creatorAddress: randomAccountA.addr,
appName: 'contract-name',
// ...
})
const appClientByCreator2 = await appClientFactory.getAppClientByCreatorAndName({
creatorAddress: randomAccountA.addr,
appName: 'contract-name-2',
// ...
})
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L150)
* Python
```py
from smart_contracts.artifacts.hello_world.hello_world_client import (
HelloArgs,
HelloWorldClient,
HelloWorldFactory,
)
"""
Get a single typed app client by id
"""
app_client = algorand_client.client.get_typed_app_client_by_id(
HelloWorldClient,
app_id=1234,
)
# or
app_client = HelloWorldClient(
algorand=algorand_client,
app_id=1234,
)
"""
For multiple app client instances use the factory
"""
factory = algorand_client.client.get_typed_app_factory(HelloWorldFactory)
# or
factory = HelloWorldFactory(algorand_client)
app_client1 = factory.get_app_client_by_id(
app_id=1234,
)
app_client2 = factory.get_app_client_by_id(
app_id=4321,
)
"""
Get typed app client by creator and name
"""
app_client = algorand_client.client.get_typed_app_client_by_creator_and_name(
HelloWorldClient,
creator_address=account_a.address,
app_name="contract-name",
# ...
)
# or
app_client = HelloWorldClient.from_creator_and_name(
algorand=algorand_client,
creator_address=account_a.address,
app_name="contract-name",
# ...
)
"""
For multiple app client instances use the factory
"""
factory = algorand_client.client.get_typed_app_factory(HelloWorldFactory)
# or
factory = HelloWorldFactory(algorand_client)
app_client1 = factory.get_app_client_by_creator_and_name(
creator_address="CREATORADDRESS",
app_name="contract-name",
# ...
)
app_client2 = factory.get_app_client_by_creator_and_name(
creator_address="CREATORADDRESS",
app_name="contract-name-2",
# ...
)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L155)
#### App is not Deployed
[Section titled “App is not Deployed”](#app-is-not-deployed)
When the app has not been deployed yet, use a typed app factory to create or deploy it. Factories also provide a convenient way to manage multiple clients for the same smart contract spec:
* TypeScript
```ts
/*
* Deploy a New App
*/
let createFactory = algorand.client.getTypedAppFactory(HelloWorldFactory)
// or
createFactory = new HelloWorldFactory({ algorand })
const { result, appClient: newAppClient } = await createFactory.send.create.bare()
// or if the contract has a custom create method:
const customFactory = algorand.client.getTypedAppFactory(CustomCreateFactory)
const { result: customCreateResult, appClient: customCreateAppClient } = await customFactory.send.create.customCreate(
{ args: { age: 28 } },
)
// Deploy or Resolve App Idempotently by Creator and Name
const { result: deployResult, appClient: deployedClient } = await createFactory.deploy({
appName: 'contract-name',
})
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L209)
* Python
```py
from smart_contracts.artifacts.custom_create.custom_create_client import (
CustomCreateArgs,
CustomCreateFactory,
)
"""
Deploy a New App
"""
factory = algorand_client.client.get_typed_app_factory(HelloWorldFactory)
# or
factory = HelloWorldFactory(algorand_client)
app_client, create_response = factory.send.create.bare()
# or if the contract has a custom create method:
factory2 = algorand_client.client.get_typed_app_factory(CustomCreateFactory)
custom_create_app_client, factory_create_response = (
factory2.send.create.custom_create(CustomCreateArgs(age=28))
)
"""
Deploy or Resolve App Idempotently by Creator and Name
"""
app_client, deploy_response = factory.deploy(
app_name="contract-name",
)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L226)
### Calling a Smart Contract Method
[Section titled “Calling a Smart Contract Method”](#calling-a-smart-contract-method)
To call a smart contract method using the application client instance, use the generated `send` methods:
* TypeScript
```ts
const methodResponse = await appClient.send.sayHello({ args: { firstName: 'there', lastName: 'world' } })
console.log(methodResponse.return)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L232)
* Python
```py
response = app_client.send.hello(args=HelloArgs(name="world"))
print(response.abi_return)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L256)
The typed app client ensures you provide the correct parameters and handles all the underlying transaction construction and submission.
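In concrete terms, “correct parameters” means argument mistakes surface before any transaction is built. A rough sketch of the pattern in stdlib Python (illustrative only; this is not the code a real generated client contains):

```python
from dataclasses import dataclass

# Hypothetical stand-in for a generated args class; real typed clients
# derive these structures from the app spec.
@dataclass
class HelloArgs:
    name: str

def build_call(args: HelloArgs) -> str:
    # A real typed client would ABI-encode the args and construct an
    # application call transaction; here we only validate and format.
    if not isinstance(args.name, str):
        raise TypeError("name must be a string")
    return f"hello({args.name!r})"

print(build_call(HelloArgs(name="world")))  # hello('world')
```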
### Example: Deploying and Interacting with a Smart Contract
[Section titled “Example: Deploying and Interacting with a Smart Contract”](#example-deploying-and-interacting-with-a-smart-contract)
For a simple example that deploys a contract and calls a `hello` method, see below:
* TypeScript
```ts
// A similar working example can be seen in the AlgoKit init production smart contract templates
// In this case the generated factory is called `HelloWorldAppFactory` and is accessible via AppClients
// These require environment variables to be present, or it will retrieve from default LocalNet
const algorand = AlgorandClient.fromEnvironment()
const deployer = await algorand.account.fromEnvironment('DEPLOYER', (1).algo())
// Create the typed app factory
const factory = algorand.client.getTypedAppFactory(HelloWorldFactory, {
defaultSender: deployer.addr,
})
// Create the app and get a typed app client for the created app (note: this creates a new instance of the app every time,
// you can use .deploy() to deploy idempotently if the app wasn't previously
// deployed or needs to be updated if that's allowed)
const { appClient } = await factory.send.create.bare()
// Make a call to an ABI method and print the result
const response = await appClient.send.sayHello({ args: { firstName: 'there', lastName: 'world' } })
console.log(response.return)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/typescript-examples/algokit-utils-ts/algorand-client.ts#L239)
* Python
```py
# A similar working example can be seen in the AlgoKit init production smart contract
# templates, when using Python deployment
# In this case the generated factory is called `HelloWorldAppFactory` and is in
# `./artifacts/HelloWorldApp/client.py`
from algokit_utils import AlgoAmount, AlgorandClient
from smart_contracts.artifacts.hello_world.hello_world_client import (
HelloArgs,
HelloWorldClient,
HelloWorldFactory,
)
# These require environment variables to be present, or it will retrieve from default LocalNet
algorand = AlgorandClient.from_environment()
deployer = algorand.account.from_environment("DEPLOYER", AlgoAmount.from_algo(1))
# Create the typed app factory
factory = algorand.client.get_typed_app_factory(
HelloWorldFactory,
default_sender=deployer.address,
)
# Create the app and get a typed app client for the created app
# (note: this creates a new instance of the app every time,
# you can use .deploy() to deploy idempotently if the app wasn't previously
# deployed or needs to be updated if that's allowed)
app_client, create_response = factory.send.create.bare()
# Make a call to an ABI method and print the result
response = app_client.send.hello(args=HelloArgs(name="world"))
print(response.abi_return)
```
[ Source](https://github.com/algorandfoundation/devportal-code-examples/blob/refs/heads/main/projects/python-examples/algokit_utils_py_examples/algorand_client.py#L262)
## When to Use Each Client Type
[Section titled “When to Use Each Client Type”](#when-to-use-each-client-type)
* Use the `AlgorandClient` when you need to:
* Send basic transactions (payments, asset transfers)
* Work with blockchain data in a general way
* Interact with contracts you don’t have specifications for
* Use Typed App Clients when you need to:
* Deploy and interact with specific smart contracts
* Benefit from type safety and IntelliSense
* Build applications that leverage contract-specific functionality
For most Algorand applications, you’ll likely use both: `AlgorandClient` for general blockchain operations and Typed App Clients for smart contract interactions.
## Next Steps
[Section titled “Next Steps”](#next-steps)
Now that you understand AlgoKit Utils Clients, you’re ready to start building on Algorand with confidence. Remember:
* Start with the AlgorandClient for general blockchain interactions
* Generate Typed Application Clients for your smart contracts
* Leverage the stateful design of these clients to simplify your code
# Algorand ARCs
> To discuss ARC drafts, use the corresponding issue in the issue tracker.
Welcome to the Algorand ARCs (Algorand Request for Comments) page. Here you’ll find information on Algorand Improvement Proposals. New ideas for ARCs are discussed through [Pull Requests ](https://github.com/algorandfoundation/ARCs/pulls)— you can find and contribute to them in the [ARC repository ](https://github.com/algorandfoundation/ARCs).
## Living ARCs
| Number | Title | Description |
| ------------------------------- | ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ |
| [0](/arc-standards/arc-0000/) | [ARC Purpose and Guidelines](/arc-standards/arc-0000/) | [Guide explaining how to write a new ARC](/arc-standards/arc-0000/) |
| [72](/arc-standards/arc-0072/) | [Algorand Smart Contract NFT Specification](/arc-standards/arc-0072/) | [Base specification for non-fungible tokens implemented as smart contracts.](/arc-standards/arc-0072/) |
| [83](/arc-standards/arc-0083/) | [xGov Council - Application Process](/arc-standards/arc-0083/) | [How to run for an xGov Council seat.](/arc-standards/arc-0083/) |
| [200](/arc-standards/arc-0200/) | [Algorand Smart Contract Token Specification](/arc-standards/arc-0200/) | [Base specification for tokens implemented as smart contracts](/arc-standards/arc-0200/) |
## Final ARCs
| Number | Title | Description |
| ------------------------------ | ----------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [1](/arc-standards/arc-0001/) | [Algorand Wallet Transaction Signing API](/arc-standards/arc-0001/) | [An API for a function used to sign a list of transactions.](/arc-standards/arc-0001/) |
| [2](/arc-standards/arc-0002/) | [Algorand Transaction Note Field Conventions](/arc-standards/arc-0002/) | [Conventions for encoding data in the note field at application-level](/arc-standards/arc-0002/) |
| [3](/arc-standards/arc-0003/) | [Conventions Fungible/Non-Fungible Tokens](/arc-standards/arc-0003/) | [Parameters Conventions for Algorand Standard Assets (ASAs) for fungible tokens and non-fungible tokens (NFTs).](/arc-standards/arc-0003/) |
| [4](/arc-standards/arc-0004/) | [Application Binary Interface (ABI)](/arc-standards/arc-0004/) | [Conventions for encoding method calls in Algorand Application](/arc-standards/arc-0004/) |
| [5](/arc-standards/arc-0005/) | [Wallet Transaction Signing API (Functional)](/arc-standards/arc-0005/) | [An API for a function used to sign a list of transactions.](/arc-standards/arc-0005/) |
| [16](/arc-standards/arc-0016/) | [Convention for declaring traits of an NFT](/arc-standards/arc-0016/) | [This is a convention for declaring traits in an NFT's metadata.](/arc-standards/arc-0016/) |
| [18](/arc-standards/arc-0018/) | [Royalty Enforcement Specification](/arc-standards/arc-0018/) | [An ARC to specify the methods and mechanisms to enforce Royalty payments as part of ASA transfers](/arc-standards/arc-0018/) |
| [19](/arc-standards/arc-0019/) | [Templating of NFT ASA URLs for mutability](/arc-standards/arc-0019/) | [Templating mechanism of the URL so that changeable data in an asset can be substituted by a client, providing a mutable URL.](/arc-standards/arc-0019/) |
| [20](/arc-standards/arc-0020/) | [Smart ASA](/arc-standards/arc-0020/) | [An ARC for an ASA controlled by an Algorand Smart Contract](/arc-standards/arc-0020/) |
| [21](/arc-standards/arc-0021/) | [Round based datafeed oracles on Algorand](/arc-standards/arc-0021/) | [Conventions for building round based datafeed oracles on Algorand](/arc-standards/arc-0021/) |
| [22](/arc-standards/arc-0022/) | [Add `read-only` annotation to ABI methods](/arc-standards/arc-0022/) | [Convention for creating methods which don't mutate state](/arc-standards/arc-0022/) |
| [23](/arc-standards/arc-0023/) | [Sharing Application Information](/arc-standards/arc-0023/) | [Append application information to compiled TEAL applications](/arc-standards/arc-0023/) |
| [25](/arc-standards/arc-0025/) | [Algorand WalletConnect v1 API](/arc-standards/arc-0025/) | [API for communication between Dapps and wallets using WalletConnect](/arc-standards/arc-0025/) |
| [27](/arc-standards/arc-0027/) | [Provider Message Schema](/arc-standards/arc-0027/) | [A comprehensive message schema for communication between clients and providers.](/arc-standards/arc-0027/) |
| [28](/arc-standards/arc-0028/) | [Algorand Event Log Spec](/arc-standards/arc-0028/) | [A methodology for structured logging by Algorand dapps.](/arc-standards/arc-0028/) |
| [32](/arc-standards/arc-0032/) | [Application Specification](/arc-standards/arc-0032/) | [A specification for fully describing an Application, useful for Application clients.](/arc-standards/arc-0032/) |
| [35](/arc-standards/arc-0035/) | [Algorand Offline Wallet Backup Protocol](/arc-standards/arc-0035/) | [Wallet-agnostic backup protocol for multiple accounts](/arc-standards/arc-0035/) |
| [36](/arc-standards/arc-0036/) | [Convention for declaring filters of an NFT](/arc-standards/arc-0036/) | [This is a convention for declaring filters in an NFT metadata](/arc-standards/arc-0036/) |
| [47](/arc-standards/arc-0047/) | [Logic Signature Templates](/arc-standards/arc-0047/) | [Defining templated logic signatures so wallets can safely sign them.](/arc-standards/arc-0047/) |
| [54](/arc-standards/arc-0054/) | [ASA Burning App](/arc-standards/arc-0054/) | [Standardized Application for Burning ASAs](/arc-standards/arc-0054/) |
| [55](/arc-standards/arc-0055/) | [On-Chain storage/transfer for Multisig](/arc-standards/arc-0055/) | [A smart contract that stores transactions and signatures for simplified multisignature use on Algorand.](/arc-standards/arc-0055/) |
| [56](/arc-standards/arc-0056/) | [Extended App Description](/arc-standards/arc-0056/) | [Adds information to the ABI JSON description](/arc-standards/arc-0056/) |
| [59](/arc-standards/arc-0059/) | [ASA Inbox Router](/arc-standards/arc-0059/) | [An application that can route ASAs to users or hold them to later be claimed](/arc-standards/arc-0059/) |
| [62](/arc-standards/arc-0062/) | [ASA Circulating Supply](/arc-standards/arc-0062/) | [Getter method for ASA circulating supply](/arc-standards/arc-0062/) |
| [65](/arc-standards/arc-0065/) | [AVM Run Time Errors In Program](/arc-standards/arc-0065/) | [Informative AVM run time errors based on program bytecode](/arc-standards/arc-0065/) |
| [69](/arc-standards/arc-0069/) | [ASA Parameters Conventions, Digital Media](/arc-standards/arc-0069/) | [Alternatives conventions for ASAs containing digital media.](/arc-standards/arc-0069/) |
| [71](/arc-standards/arc-0071/) | [Non-Transferable ASA](/arc-standards/arc-0071/) | [Parameters Conventions Non-Transferable Algorand Standard Asset](/arc-standards/arc-0071/) |
| [73](/arc-standards/arc-0073/) | [Algorand Interface Detection Spec](/arc-standards/arc-0073/) | [A specification for smart contracts and indexers to detect interfaces of smart contracts.](/arc-standards/arc-0073/) |
| [74](/arc-standards/arc-0074/) | [NFT Indexer API](/arc-standards/arc-0074/) | [REST API for reading data about Application's NFTs.](/arc-standards/arc-0074/) |
| [86](/arc-standards/arc-0086/) | [xGov status and voting power](/arc-standards/arc-0086/) | [xGov status and voting power for the Algorand Governance](/arc-standards/arc-0086/) |
| [90](/arc-standards/arc-0090/) | [URI scheme](/arc-standards/arc-0090/) | [Consolidated specification for encoding Algorand transactions and queries as URIs.](/arc-standards/arc-0090/) |
## Last Call ARCs
| Number | Title | Description |
| ------------------------------ | ------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| [53](/arc-standards/arc-0053/) | [Metadata Declarations](/arc-standards/arc-0053/) | [A specification for a decentralized, Self-declared, & Verifiable Tokens, Collections, & Metadata](/arc-standards/arc-0053/) |
| [89](/arc-standards/arc-0089/) | [ASA Metadata Registry](/arc-standards/arc-0089/) | [Singleton Application providing ASA metadata via Algod API or the AVM](/arc-standards/arc-0089/) |
## Withdrawn ARCs
| Number | Title | Description |
| ------------------------------ | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [12](/arc-standards/arc-0012/) | [Claimable ASA from vault application](/arc-standards/arc-0012/) | [A smart signature contract account that can receive & disburse claimable Algorand Smart Assets (ASA) to an intended recipient account.](/arc-standards/arc-0012/) |
## Deprecated ARCs
| Number | Title | Description |
| ------------------------------ | ---------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
| [6](/arc-standards/arc-0006/) | [Algorand Wallet Address Discovery API](/arc-standards/arc-0006/) | [API function, enable, which allows the discovery of accounts](/arc-standards/arc-0006/) |
| [7](/arc-standards/arc-0007/) | [Algorand Wallet Post Transactions API](/arc-standards/arc-0007/) | [API function to Post Signed Transactions to the network.](/arc-standards/arc-0007/) |
| [8](/arc-standards/arc-0008/) | [Algorand Wallet Sign and Post API](/arc-standards/arc-0008/) | [A function used to simultaneously sign and post transactions to the network.](/arc-standards/arc-0008/) |
| [9](/arc-standards/arc-0009/) | [Algorand Wallet Algodv2 and Indexer API](/arc-standards/arc-0009/) | [An API for accessing Algod and Indexer through a user's preferred connection.](/arc-standards/arc-0009/) |
| [10](/arc-standards/arc-0010/) | [Algorand Wallet Reach Minimum Requirements](/arc-standards/arc-0010/) | [Minimum requirements for Reach to function with a given wallet.](/arc-standards/arc-0010/) |
| [11](/arc-standards/arc-0011/) | [Algorand Wallet Reach Browser Spec](/arc-standards/arc-0011/) | [Convention for DApps to discover Algorand wallets in browser](/arc-standards/arc-0011/) |
| [15](/arc-standards/arc-0015/) | [Encrypted Short Messages](/arc-standards/arc-0015/) | [Scheme for encryption/decryption that allows for private messages.](/arc-standards/arc-0015/) |
| [26](/arc-standards/arc-0026/) | [URI scheme](/arc-standards/arc-0026/) | [A specification for encoding Transactions in a URI format.](/arc-standards/arc-0026/) |
| [33](/arc-standards/arc-0033/) | [xGov Pilot - Becoming an xGov](/arc-standards/arc-0033/) | [Explanation on how to become Expert Governors.](/arc-standards/arc-0033/) |
| [34](/arc-standards/arc-0034/) | [xGov Pilot - Proposal Process](/arc-standards/arc-0034/) | [Criteria for the creation of proposals.](/arc-standards/arc-0034/) |
| [42](/arc-standards/arc-0042/) | [xGov Pilot - Integration](/arc-standards/arc-0042/) | [Integration of xGov Process](/arc-standards/arc-0042/) |
| [48](/arc-standards/arc-0048/) | [Targeted DeFi Rewards](/arc-standards/arc-0048/) | [Targeted DeFi Rewards, Terms and Conditions](/arc-standards/arc-0048/) |
| [49](/arc-standards/arc-0049/) | [NFT Rewards](/arc-standards/arc-0049/) | [NFT Rewards, Terms and Conditions](/arc-standards/arc-0049/) |
| [78](/arc-standards/arc-0078/) | [URI scheme, keyreg Transactions extension](/arc-standards/arc-0078/) | [A specification for encoding Key Registration Transactions in a URI format.](/arc-standards/arc-0078/) |
| [79](/arc-standards/arc-0079/) | [URI scheme, App NoOp call extension](/arc-standards/arc-0079/) | [A specification for encoding NoOp Application call Transactions in a URI format.](/arc-standards/arc-0079/) |
| [82](/arc-standards/arc-0082/) | [URI scheme blockchain information](/arc-standards/arc-0082/) | [Querying blockchain information using a URI format](/arc-standards/arc-0082/) |
## Draft ARCs
| Number | Title | Description |
| ------------------------------ | ----------------------------------------------------------------- | ----------------------------------------------------------- |
| [60](/arc-standards/arc-0060/) | [Algorand Wallet Arbitrary Signing API](/arc-standards/arc-0060/) | [API function for signing data](/arc-standards/arc-0060/) |
| [87](/arc-standards/arc-0087/) | [Key Name Specification](/arc-standards/arc-0087/) | [A system for addressable values](/arc-standards/arc-0087/) |
## ARC Status Terms
[Section titled “ARC Status Terms”](#arc-status-terms)
* **Idea** - An idea that is pre-draft. This is not tracked within the ARC Repository.
* **Draft** - The first formally tracked stage of an ARC in development. An ARC is merged by an ARC Editor into the ARC repository when properly formatted.
* **Review** - An ARC Author marks an ARC as ready for and requesting Peer Review.
* **Last Call** - This is the final review window for an ARC before moving to FINAL. An ARC editor will assign Last Call status and set a review end date (`last-call-deadline`), typically 14 days later. If this period results in necessary normative changes it will revert the ARC to Review.
* **Final** - This ARC represents the final standard. A Final ARC exists in a state of finality and should only be updated to correct errata and add non-normative clarifications.
* **Stagnant** - Any ARC in Draft or Review if inactive for a period of 6 months or greater is moved to Stagnant. An ARC may be resurrected from this state by Authors or ARC Editors through moving it back to Draft.
* **Withdrawn** - The ARC Author(s) have withdrawn the proposed ARC. This state has finality and can no longer be resurrected using this ARC number. If the idea is pursued at a later date it is considered a new proposal.
* **Deprecated** - This ARC has been deprecated. It has been replaced by another one or is now obsolete.
* **Living** - A special status for ARCs that are designed to be continually updated and not reach a state of finality.
# ARC Purpose and Guidelines
> Guide explaining how to write a new ARC
## Abstract
[Section titled “Abstract”](#abstract)
### What is an ARC?
[Section titled “What is an ARC?”](#what-is-an-arc)
ARC stands for Algorand Request for Comments. An ARC is a design document providing information to the Algorand community or describing a new feature for Algorand or its processes or environment. The ARC should provide a concise technical specification and a rationale for the feature. The ARC author is responsible for building consensus within the community and documenting dissenting opinions.
We intend ARCs to be the primary mechanisms for proposing new features and collecting community technical input on an issue. We maintain ARCs as text files in a versioned repository. Their revision history is the historical record of the feature proposal.
## Specification
[Section titled “Specification”](#specification)
### ARC Types
[Section titled “ARC Types”](#arc-types)
There are three types of ARC:
* A **Standards track ARC**: application-level standards and conventions, including contract standards such as NFT standards, Algorand ABI, URI schemes, library/package formats, and wallet formats.
* A **Meta ARC** describes a process surrounding Algorand or proposes a change to (or an event in) a process. Process ARCs are like Standards track ARCs but apply to areas other than the Algorand protocol. They may propose an implementation, but not to Algorand’s codebase; they often require community consensus; unlike Informational ARCs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in Algorand development. Any meta-ARC is also considered a Process ARC.
* An **Informational ARC** describes an Algorand design issue or provides general guidelines or information to the Algorand community but does not propose a new feature. Informational ARCs do not necessarily represent Algorand community consensus or a recommendation, so users and implementers are free to ignore Informational ARCs or follow their advice.
We recommend that a single ARC contains a single key proposal or new idea. The more focused the ARC, the more successful it tends to be. A change to one client does not require an ARC; a change that affects multiple clients, or defines a standard for multiple apps to use, does.
An ARC must meet specific minimum criteria. It must be a clear and complete description of the proposed enhancement. The enhancement must represent a net improvement. If applicable, the proposed implementation must be solid and not complicate the protocol unduly.
### Shepherding an ARC
[Section titled “Shepherding an ARC”](#shepherding-an-arc)
Parties involved in the process are you, the champion or *ARC author*, the [*ARC editors*](#arc-editors), the [*Algorand Core Developers*](https://github.com/orgs/algorand/people), and the [*Algorand Foundation Team*](https://github.com/orgs/algorandfoundation/people).
Before writing a formal ARC, you should vet your idea. Ask the Algorand community first if an idea is original to avoid wasting time on something that will be rejected based on prior research. You **MUST** open an issue on the [Algorand ARC Github Repository](https://github.com/algorandfoundation/ARCs/issues) to do this. You **SHOULD** also share the idea on the [Algorand Discord #arcs chat room](https://discord.gg/algorand).
Once the idea has been vetted, your next responsibility will be to create a [pull request](https://github.com/algorandfoundation/ARCs/pulls) to present (through an ARC) the idea to the reviewers and all interested parties and invite editors, developers, and the community to give feedback on the aforementioned issue.
The pull request with the **DRAFT** status **MUST**:
* Have been vetted on the forum.
* Be editable by ARC Editors; it will be closed otherwise.
You should try to gauge whether the interest in your ARC is commensurate with both the work involved in implementing it and how many parties will have to conform to it. Negative community feedback will be considered and may prevent your ARC from moving past the Draft stage.
To facilitate the discussion between each party involved in an ARC, you **SHOULD** use the specific [channel in the Algorand Discord](https://discord.com/channels/491256308461207573/1011541977189326852).
The ARC author is in charge of creating the PR and changing the status to **REVIEW**.
The pull request with the **REVIEW** status **MUST**:
* Contain a reference implementation.
* Have garnered the interest of multiple projects; it will be set to **STAGNANT** otherwise.
To update the status of an ARC from **REVIEW** to **LAST CALL**, a discussion will occur with all parties involved in the process. Any stakeholder **SHOULD** implement the ARC to point out any flaws that might occur.
*In short, the role of a champion is to write the ARC using the style and format described below, shepherd the discussions in the appropriate forums, build community consensus around the idea, and gather projects with similar needs who will implement it.*
### ARC Process
[Section titled “ARC Process”](#arc-process)
The following is the standardization process for all ARCs in all tracks:

**Idea** - An idea that is pre-draft. This is not tracked within the ARC Repository.
**Draft** - The first formally tracked stage of an ARC in development. An ARC is merged by an ARC Editor into the ARC repository when adequately formatted.
**Review** - An ARC Author marks an ARC as ready for and requests Peer Review.
**Last Call** - The final review window for an ARC before moving to `FINAL`. An ARC editor will assign `Last Call` status and set a review end date (last-call-deadline), typically 1 month later.
If this period results in necessary normative changes, the ARC will revert to `REVIEW`.
**Final** - This ARC represents the final standard. A Final ARC exists in a state of finality and should only be updated to correct errata and add non-normative clarifications.
**Stagnant** - Any ARC in `DRAFT`,`REVIEW` or `LAST CALL`, if inactive for 6 months or greater, is moved to `STAGNANT`. An ARC may be resurrected from this state by Authors or ARC Editors by moving it back to `DRAFT`.
> An ARC with the status **STAGNANT** that has no activity for 1 month will be closed. *ARC Authors are notified of any algorithmic change to the status of their ARC.*
**Withdrawn** - The ARC Author(s)/Editor(s) has withdrawn the proposed ARC. This state has finality and can no longer be resurrected using this ARC number. If the idea is pursued later, it is considered a new proposal.
**Idle** - Any ARC in `FINAL` or `LIVING` that has not been widely adopted by the ecosystem within 12 months. It will be moved to `DEPRECATED` after 6 months of `IDLE`, and can go back to `FINAL` or `LIVING` if adoption starts.
**Living** - A special status for ARCs which, by design, will be continually updated and **MIGHT** not reach a state of finality.
**Deprecated** - A status for ARCs that are no longer aligned with our ecosystem or have been superseded by another ARC.
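The allowed movements between these statuses can be summarized as a transition table. The sketch below is illustrative only (the `ARC_TRANSITIONS` map and `canMove` helper are not part of the process); it encodes the transitions described above, treating `Withdrawn` as a terminal state entered by author/editor action rather than an automatic transition:

```typescript
// Illustrative status machine derived from the process description above.
// "Withdrawn" and "Deprecated" are terminal, so they map to no targets.
const ARC_TRANSITIONS: Record<string, string[]> = {
  Idea: ["Draft"],
  Draft: ["Review", "Stagnant"],
  Review: ["Last Call", "Stagnant"],
  "Last Call": ["Final", "Review", "Stagnant"],
  Stagnant: ["Draft"], // resurrected by Authors or ARC Editors
  Final: ["Idle"],
  Living: ["Idle"],
  Idle: ["Deprecated", "Final", "Living"],
  Deprecated: [],
  Withdrawn: [],
};

function canMove(from: string, to: string): boolean {
  return (ARC_TRANSITIONS[from] ?? []).includes(to);
}

console.log(canMove("Review", "Last Call")); // true
console.log(canMove("Final", "Draft")); // false: Final ARCs never reopen as Draft
```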
### What belongs in a successful ARC?
[Section titled “What belongs in a successful ARC?”](#what-belongs-in-a-successful-arc)
Each ARC should have the following parts:
* Preamble - [RFC 822](https://www.rfc-editor.org/rfc/rfc822) style headers containing metadata about the ARC, including the ARC number, a short descriptive title (limited to a maximum of 44 characters), a description (limited to a maximum of 140 characters), and the author details. Irrespective of the category, the title and description should not include ARC numbers. See [below](/arc-standards/arc-0000#arc-header-preamble) for details.
* Abstract - This is a multi-sentence (short paragraph) technical summary. It should be a very terse and human-readable version of the specification section. Someone should be able to read only the abstract to get the gist of what this specification does.
* Specification - The technical specification should describe the syntax and semantics of any new feature. The specification should be detailed enough to allow competing, interoperable implementations for any of the current Algorand clients.
* Rationale - The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g., how the feature is supported in other languages. The rationale may also provide evidence of consensus within the community and should discuss significant objections or concerns raised during discussions.
* Backwards Compatibility - All ARCs that introduce backward incompatibilities must include a section describing these incompatibilities and their severity. The ARC must explain how the author proposes to deal with these incompatibilities. ARC submissions without a sufficient backward compatibility treatise may be rejected outright.
* Test Cases - Test cases for implementation are mandatory for ARCs affecting consensus changes. Tests should either be inlined in the ARC as data (such as input/expected output pairs), or included in `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-###/`.
* Reference Implementation - A section that contains a reference/example implementation that people **MUST** use to assist in understanding or implementing this specification. If the reference implementation is too complex, it **MUST** be included in `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-###/`.
* Security Considerations - All ARCs must contain a section that discusses the security implications/considerations relevant to the proposed change. Include information that might be important for security discussions, surfaces risks, and can be used throughout the life-cycle of the proposal. E.g., include security-relevant design decisions, concerns, essential discussions, implementation-specific guidance and pitfalls, an outline of threats and risks, and how they are being addressed. ARC submissions missing the “Security Considerations” section will be rejected. An ARC cannot proceed to status “Final” without a Security Considerations discussion deemed sufficient by the reviewers.
* Copyright Waiver - All ARCs must be in the public domain. See the bottom of this ARC for an example copyright waiver.
### ARC Formats and Templates
[Section titled “ARC Formats and Templates”](#arc-formats-and-templates)
ARCs should be written in [markdown](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) format. There is a [template](https://github.com/algorandfoundation/ARCs/blob/main/ARC-template.md) to follow.
### ARC Header Preamble
[Section titled “ARC Header Preamble”](#arc-header-preamble)
Each ARC must begin with an [RFC 822](https://www.ietf.org/rfc/rfc822.txt) style header preamble, preceded and followed by three hyphens (`---`). This header is also termed “front matter” by [Jekyll](https://jekyllrb.com/docs/front-matter/). The headers must appear in the following order. Headers marked with ”\*” are optional and are described below. All other headers are required.
`arc:` *ARC number* (It is determined by the ARC editor)
`title:` *The ARC title is a few words, not a complete sentence*
`description:` *Description is one full (short) sentence*
`author:` *A list of the author’s or authors’ name(s) and/or username(s), or name(s) and email(s). Details are below.*
> The `author` header lists the names, email addresses, or usernames of the authors/owners of the ARC. Those who prefer anonymity may use a username only, or a first name and a username. The format of the `author` header value must be: Random J. User <> or Random J. User (@username). At least one author must use a GitHub username in order to be notified of change requests and be able to approve or reject them.

`* discussions-to:` *A url pointing to the official discussion thread*

> While an ARC is in state `Idea`, a `discussions-to` header will indicate the URL where the ARC is being discussed. As mentioned above, an example of a place to discuss your ARC is the Algorand forum, but you can also use the Algorand Discord #arcs chat room. When the ARC reaches the state `Draft`, the `discussions-to` header will redirect to the discussion in [the Issues section of this repository](https://github.com/algorandfoundation/ARCs/issues).
`status:` *Draft, Review, Last Call, Final, Stagnant, Withdrawn, Living*
`* last-call-deadline:` *Date review period ends*
`type:` *Standards Track, Meta, or Informational*
`* category:` *Core, Networking, Interface, or ARC* (Only needed for Standards Track ARCs)
`* sub-category:` *General, Asa, Application, Explorer or Wallet* (An optional header classifying the ARC into one of these sub-categories)
`created:` *Date created on*
> The `created` header records the date that the ARC was assigned a number. Both headers should be in yyyy-mm-dd format, e.g. 2001-08-14.

`* updated:` *Comma separated list of dates*

> The `updated` header records the date(s) when the ARC was updated with “substantial” changes. This header is only valid for ARCs of Draft and Active status.

`* requires:` *ARC number(s)*

> ARCs may have a `requires` header, indicating the ARC numbers that this ARC depends on.

`* replaces:` *ARC number(s)*

`* superseded-by:` *ARC number(s)*

> ARCs may also have a `superseded-by` header indicating that an ARC has been rendered obsolete by a later document; the value is the number of the ARC that replaces the current document. The newer ARC must have a `replaces` header containing the number of the ARC that it rendered obsolete.
> ARCs may also have an `extended-by` header indicating that functionalities have been added to the existing, still active ARC; the value is the number of the ARC that updates the current document. The newer ARC must have an `extends` header containing the number of the ARC that it extends.
`* resolution:` *A url pointing to the resolution of this ARC*
Headers that permit lists must separate elements with commas.
Headers requiring dates will always do so in the format of ISO 8601 (yyyy-mm-dd).
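Putting the headers above together, a complete preamble might look like the following. All values here are hypothetical and for illustration only (the ARC number, title, and URLs are made up):

```yaml
---
arc: 9999
title: Hypothetical Token Naming Convention
description: A made-up one-sentence description used purely for illustration.
author: Random J. User (@username)
discussions-to: https://github.com/algorandfoundation/ARCs/issues/9999
status: Draft
type: Standards Track
category: Interface
created: 2001-08-14
---
```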
### Style Guide
[Section titled “Style Guide”](#style-guide)
When referring to an ARC by number, it should be written in the hyphenated form `ARC-X` where `X` is the ARC’s assigned number.
### Linking to other ARCs
[Section titled “Linking to other ARCs”](#linking-to-other-arcs)
References to other ARCs should follow the format `ARC-N`, where `N` is the ARC number you are referring to. Each ARC that is referenced in an ARC **MUST** be accompanied by a relative markdown link the first time it is referenced, and **MAY** be accompanied by a link on subsequent references. The link **MUST** always be done via relative paths so that the links work in this GitHub repository, forks of this repository, the main ARCs site, mirrors of the main ARC site, etc. For example, you would link to this ARC with `[ARC-0](./arc-0000.md)`.
### Auxiliary Files
[Section titled “Auxiliary Files”](#auxiliary-files)
Images, diagrams, and auxiliary files should be included in a subdirectory of the `assets` folder for that ARC as follows: `assets/arc-N` (where **N** is to be replaced with the ARC number). When linking to an image in the ARC, use relative links such as `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-1/image.png`.
### Application’s Methods name
[Section titled “Application’s Methods name”](#applications-methods-name)
To provide information about which ARCs have been implemented in a particular application, a namespace with the ARC number should be used before every method name: `arc<N>_methodName`.
> Where `<N>` represents the specific ARC number associated with the standard.
eg:
```json
{
  "name": "Method naming convention",
  "desc": "Example",
  "methods": [
    {
      "name": "arc0_method1",
      "desc": "Method 1",
      "args": [
        { "type": "uint64", "name": "Number", "desc": "A number" }
      ],
      "returns": { "type": "void[]" }
    },
    {
      "name": "arc0_method2",
      "desc": "Method 2",
      "args": [
        { "type": "byte[]", "name": "user_data", "desc": "Some characters" }
      ],
      "returns": { "type": "void[]" }
    }
  ]
}
```
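A dApp or tool could sanity-check this convention mechanically. The helper below is a sketch, not part of the standard, and `hasArcNamespace` is a hypothetical name:

```typescript
// Check whether a method name follows the `arc<N>_methodName` namespacing
// convention for a given ARC number, e.g. "arc0_method1" for ARC-0.
function hasArcNamespace(methodName: string, arcNumber: number): boolean {
  const prefix = `arc${arcNumber}_`;
  // The name must start with the prefix and have a non-empty method part.
  return methodName.startsWith(prefix) && methodName.length > prefix.length;
}

console.log(hasArcNamespace("arc0_method1", 0)); // true
console.log(hasArcNamespace("method1", 0)); // false: missing namespace
```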
### Application’s Event name
[Section titled “Application’s Event name”](#applications-event-name)
To provide information about which ARCs have been implemented in a particular application, a namespace with the ARC number should be used before every [ARC-73](/arc-standards/arc-0073) event name: `arc<N>_EventName`.
> Where `<N>` represents the specific ARC number associated with the standard.
eg:
```json
{
  "name": "Event naming convention",
  "desc": "Example",
  "events": [
    {
      "name": "arc0_Event1",
      "desc": "Method 1",
      "args": [
        { "type": "uint64", "name": "Number", "desc": "A number" }
      ]
    },
    {
      "name": "arc0_Event2",
      "desc": "Method 2",
      "args": [
        { "type": "byte[]", "name": "user_data", "desc": "Some characters" }
      ]
    }
  ]
}
```
## Rationale
[Section titled “Rationale”](#rationale)
This document was derived heavily from [Ethereum’s EIP-1](https://github.com/ethereum/eips), which was written by Martin Becze and Hudson Jameson, which in turn was derived from [Bitcoin’s BIP-0001](https://github.com/bitcoin/bips) written by Amir Taaki, which in turn was derived from [Python’s PEP-0001](https://www.python.org/dev/peps/). In many places, text was copied and modified. Although the PEP-0001 text was written by Barry Warsaw, Jeremy Hylton, and David Goodger, they are not responsible for its use in the Algorand Request for Comments. They should not be bothered with technical questions specific to Algorand or the ARC. Please direct all comments to the ARC editors.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
### Usage of related link
[Section titled “Usage of related link”](#usage-of-related-link)
Every link **SHOULD** be relative.
| OK | `[ARC-0](./arc-0000.md)` |
| :-- | -------------------------------------------------------------------------------: |
| NOK | `[ARC-0](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0000.md)` |
If you are using many links you **SHOULD** use this format:
### Usage of non-related link
[Section titled “Usage of non-related link”](#usage-of-non-related-link)
If for some reason (CC0, RFC, …) you need to refer to something outside of the repository, you **MUST** use the following syntax:
| OK | `ARCS` |
| :-- | --------------------------------------------------------------: |
| NOK | `[ARCS](https://github.com/algorandfoundation/ARCs)` |
### Transferring ARC Ownership
[Section titled “Transferring ARC Ownership”](#transferring-arc-ownership)
It occasionally becomes necessary to transfer ownership of ARCs to a new champion. In general, we would like to retain the original author as a co-author of the transferred ARC, but that is really up to the original author. A good reason to transfer ownership is that the original author no longer has the time or interest in updating it or following through with the ARC process or has fallen off the face of the ‘net (i.e., is unreachable or is not responding to email). A wrong reason to transfer ownership is that you disagree with the direction of the ARC. We try to build consensus around an ARC, but if that is not possible, you can always submit a competing ARC.
If you are interested in assuming ownership of an ARC, send a message asking to take over, addressed to both the original author and the ARC editor. If the original author does not respond in a timely manner, the ARC editor will make a unilateral decision (it’s not like such decisions can’t be reversed :)).
### ARC Editors
[Section titled “ARC Editors”](#arc-editors)
The current ARC editor is:
* Stéphane Barroso (@sudoweezy)
### ARC Editor Responsibilities
[Section titled “ARC Editor Responsibilities”](#arc-editor-responsibilities)
For each new ARC that comes in, an editor does the following:
* Read the ARC to check if it is ready: sound and complete. The ideas must make technical sense, even if they do not seem likely to get to final status.
* The title should accurately describe the content.
* Check the ARC for language (spelling, grammar, sentence structure, etc.), markup (GitHub-flavored Markdown), and code style.
If the ARC is not ready, the editor will send it back to the author for revision with specific instructions.
Once the ARC is ready for the repository, the ARC editor will:
* Assign an ARC number
* Create a living discussion in the Issues section of this repository
> The issue will be closed when the ARC reaches the status *Final* or *Withdrawn*
* Merge the corresponding pull request
* Send a message back to the ARC author with the next step.
The editors do not pass judgment on ARCs. We merely do the administrative & editorial part.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Wallet Transaction Signing API
> An API for a function used to sign a list of transactions.
## Abstract
[Section titled “Abstract”](#abstract)
The goal of this API is to propose a standard way for a dApp to request the signature of a list of transactions from an Algorand wallet. This document also includes detailed security requirements to reduce the risk of users being tricked into signing dangerous transactions. As the Algorand blockchain adds new features, these requirements may change.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
### Overview
[Section titled “Overview”](#overview)
> This overview section is non-normative.
After this overview, the syntax of the interfaces is described, followed by the semantics and the security requirements.
At a high level, the API allows signing:
* A valid group of transactions (aka atomic transfers).
* (**OPTIONAL**) A list of groups of transactions.
Signatures are requested by calling a function `signTxns(txns)` on a list `txns` of transactions. The dApp may also provide an optional parameter `opts`.
Each transaction is represented by a `WalletTransaction` object. The only required field of a `WalletTransaction` is `txn`, a base64 encoding of the canonical msgpack encoding of the unsigned transaction. There are three main use cases:
1. The transaction needs to be signed and the sender of the transaction is an account known by the wallet. This is the most common case. Example:
```json
{
"txn": "iaNhbXT..."
}
```
The wallet is free to generate the resulting signed transaction in any way it wants. In particular, the signature may be a multisig, may involve rekeying, or for very advanced wallets may use logicsigs.
> Remark: If the wallet uses a large logicsig to sign the transaction and there is congestion, the fee estimated by the dApp may be too low. A future standard may provide a wallet API allowing the dApp to compute correctly the estimated fee. Before such a standard, the dApp may need to retry with a higher fee when this issue arises.
2. The transaction does not need to be signed. This happens when the transaction is part of a group of transactions and is signed by another party or by a logicsig. In that case, the field `signers` is set to an empty array. Example:
```json
{
"txn": "iaNhbXT...",
"signers": []
}
```
3. (**OPTIONAL**) The transaction needs to be signed but the sender of the transaction is *not* an account known by the wallet. This happens when the dApp uses a sender account derived from one or more accounts of the wallet. For example, the sender account may be a multisig account with public keys corresponding to some accounts of the wallet, or the sender account may be rekeyed to an account of the wallet. Example:
```json
{
"txn": "iaNhbXT...",
"authAddr": "HOLQV2G65F6PFM36MEUKZVHK3XM7UEIFLG35UJGND77YDXHKXHKX4UXUQU",
"msig": {
"version": 1,
"threshold": 2,
"addrs": [
"5MF575NQUDMRWOTS27KIBL2MFPJHKQEEF4LZEN6H3CZDAYVUKESMGZPK3Q",
"FS7G3AHTDVMQNQQBHZYMGNWAX7NV2XAQSACQH3QDBDOW66DYTAQQW76RYA",
"DRSHY5ONWKVMWWASTB7HOELVF5HRUTRQGK53ZK3YNMESZJR6BBLMNH4BBY"
]
},
"signers": ...
}
```
Note that in both the first and the third use cases, the wallet may sign the transaction using a multisig and may use a different authorized address (`authAddr`) than the sender address (i.e., rekeying). The main difference is that in the first case, the wallet knows how to sign the transaction (i.e., whether the sender address is a multisig and/or rekeyed), while in the third case, the wallet may not know it.
### Syntax and Interfaces
[Section titled “Syntax and Interfaces”](#syntax-and-interfaces)
> Interfaces are defined in TypeScript. All the objects that are defined are valid JSON objects.
#### Interface `SignTxnsFunction`
[Section titled “Interface SignTxnsFunction”](#interface-signtxnsfunction)
A wallet transaction signing function `signTxns` is defined by the following interface:
```typescript
export type SignTxnsFunction = (
txns: WalletTransaction[],
opts?: SignTxnsOpts
) => Promise<(SignedTxnStr | null)[]>;
```
where:
* `txns` is a non-empty list of `WalletTransaction` objects (defined below).
* `opts` is an optional parameter object `SignTxnsOpts` (defined below).
In case of error, the wallet (i.e., the `signTxns` function in this document) **MUST** reject the promise with an error object `SignTxnsError` defined below. This ARC uses interchangeably the terms “throw an error” and “reject a promise with an error”.
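As an illustration, a dApp might call such a function as follows. This is a sketch under assumptions: the `requestSignature` wrapper and the treatment of error code `4001` as a graceful decline are hypothetical choices, not mandated by this ARC, and the interface types are inlined to keep the example self-contained:

```typescript
type SignedTxnStr = string;

interface WalletTransaction {
  txn: string; // base64 msgpack encoding of the unsigned transaction
  signers?: string[];
}

type SignTxnsFunction = (
  txns: WalletTransaction[]
) => Promise<(SignedTxnStr | null)[]>;

// Request a single signature; return null when the user declines (code 4001),
// and rethrow any other wallet error.
async function requestSignature(
  signTxns: SignTxnsFunction,
  txnB64: string
): Promise<SignedTxnStr | null> {
  try {
    const [signed] = await signTxns([{ txn: txnB64 }]);
    return signed;
  } catch (err: any) {
    if (err.code === 4001) return null; // user rejected the request
    throw err;
  }
}
```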
#### Interface `AlgorandAddress`
[Section titled “Interface AlgorandAddress”](#interface-algorandaddress)
An Algorand address is represented by a 58-character base32 string. It includes the checksum.
```typescript
export type AlgorandAddress = string;
```
An Algorand address is *valid* if it is a valid base32 string without padding and if the checksum is valid.
> Example: `"6BJ32SU3ABLWSBND7U5H2QICQ6GGXVD7AXSSMRYM2GO3RRNHCZIUT4ISAQ"` is a valid Algorand address.
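A purely syntactic check of this format can be sketched as follows. Note this is an assumption-laden sketch: checksum verification (which requires hashing the underlying public key) is deliberately omitted, so it rejects malformed strings but not mistyped addresses:

```typescript
// A well-formed Algorand address is 58 characters from the RFC 4648 base32
// alphabet (A-Z, 2-7) with no padding. This does NOT verify the checksum.
const ADDRESS_RE = /^[A-Z2-7]{58}$/;

function looksLikeAlgorandAddress(addr: string): boolean {
  return ADDRESS_RE.test(addr);
}

console.log(
  looksLikeAlgorandAddress(
    "6BJ32SU3ABLWSBND7U5H2QICQ6GGXVD7AXSSMRYM2GO3RRNHCZIUT4ISAQ"
  )
); // true
console.log(looksLikeAlgorandAddress("not-an-address")); // false
```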
#### Interface `SignedTxnStr`
[Section titled “Interface SignedTxnStr”](#interface-signedtxnstr)
`SignedTxnStr` is the base64 encoding of the canonical msgpack encoding of a `SignedTxn` object, as defined in the [Algorand specs](https://github.com/algorandfoundation/specs). For Algorand version 2.5.5, see the [authorization and signatures Section](https://github.com/algorandfoundation/specs/blob/d050b3cade6d5c664df8bd729bf219f179812595/dev/ledger.md#authorization-and-signatures) of the specs or the [Go structure](https://github.com/algorand/go-algorand/blob/304815d00b9512cf9f91dbb987fead35894676f4/data/transactions/signedtxn.go#L31).
```typescript
export type SignedTxnStr = string;
```
#### Interface `MultisigMetadata`
[Section titled “Interface MultisigMetadata”](#interface-multisigmetadata)
A `MultisigMetadata` object specifies the parameters of an Algorand multisig address.
```typescript
export interface MultisigMetadata {
/**
* Multisig version.
*/
version: number;
/**
* Multisig threshold value. Authorization requires a subset of signatures,
* equal to or greater than the threshold value.
*/
threshold: number;
/**
* List of Algorand addresses of possible signers for this
* multisig. Order is important.
*/
addrs: AlgorandAddress[];
}
```
* `version` should always be 1.
* `threshold` should be between 1 and the length of `addrs`.
> Interface originally from github.com/algorand/js-algorand-sdk/blob/e07d99a2b6bd91c4c19704f107cfca398aeb9619/src/types/multisig.ts, where `string` has been replaced by `AlgorandAddress`.
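The two constraints above can be checked mechanically. The helper below is a sketch (not part of the standard; `isValidMultisigMetadata` is a hypothetical name) covering only the stated version and threshold rules, not address validity:

```typescript
interface MultisigMetadata {
  version: number;
  threshold: number;
  addrs: string[];
}

// Sanity-check the constraints stated above: version is always 1, and the
// threshold lies between 1 and the number of listed signers.
function isValidMultisigMetadata(m: MultisigMetadata): boolean {
  return m.version === 1 && m.threshold >= 1 && m.threshold <= m.addrs.length;
}

console.log(isValidMultisigMetadata({ version: 1, threshold: 2, addrs: ["A", "B", "C"] })); // true
console.log(isValidMultisigMetadata({ version: 1, threshold: 4, addrs: ["A", "B", "C"] })); // false
```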
#### Interface `WalletTransaction`
[Section titled “Interface WalletTransaction”](#interface-wallettransaction)
A `WalletTransaction` object represents a transaction to be signed by a wallet.
```typescript
export interface WalletTransaction {
/**
* Base64 encoding of the canonical msgpack encoding of a Transaction.
*/
txn: string;
/**
* Optional authorized address used to sign the transaction when the account
* is rekeyed. Also called the signor/sgnr.
*/
authAddr?: AlgorandAddress;
/**
* Multisig metadata used to sign the transaction
*/
msig?: MultisigMetadata;
/**
* Optional list of addresses that must sign the transactions
*/
signers?: AlgorandAddress[];
/**
* Optional base64 encoding of the canonical msgpack encoding of a
* SignedTxn corresponding to txn, when signers=[]
*/
stxn?: SignedTxnStr;
/**
* Optional message explaining the reason of the transaction
*/
message?: string;
/**
* Optional message explaining the reason of this group of transaction
* Field only allowed in the first transaction of a group
*/
groupMessage?: string;
}
```
#### Interface `SignTxnsOpts`
[Section titled “Interface SignTxnsOpts”](#interface-signtxnsopts)
A `SignTxnsOpts` object specifies optional parameters of the `signTxns` function:
```typescript
export type SignTxnsOpts = {
/**
* Optional message explaining the reason of the group of transactions
*/
message?: string;
}
```
#### Error Interface `SignTxnsError`
[Section titled “Error Interface SignTxnsError”](#error-interface-signtxnserror)
In case of error, the `signTxns` function **MUST** throw (i.e., reject the promise with) a `SignTxnsError` object
```typescript
interface SignTxnsError extends Error {
code: number;
data?: any;
}
```
where:
* `message`:
* **MUST** be a human-readable string
* **SHOULD** adhere to the specifications in the Error Standards section below
* `code`:
* **MUST** be an integer number
* **MUST** adhere to the specifications in the Error Standards section below
* `data`:
* **SHOULD** contain any other useful information about the error
> Inspired from github.com/ethereum/EIPs/blob/master/EIPS/eip-1193.md
### Error Standards
[Section titled “Error Standards”](#error-standards)
| Status Code | Name | Description |
| ----------- | --------------------- | --------------------------------------------------------------------------- |
| 4001 | User Rejected Request | The user rejected the request. |
| 4100 | Unauthorized | The requested operation and/or account has not been authorized by the user. |
| 4200 | Unsupported Operation | The wallet does not support the requested operation. |
| 4201 | Too Many Transactions | The wallet does not support signing that many transactions at a time. |
| 4202 | Uninitialized Wallet | The wallet was not initialized properly beforehand. |
| 4300 | Invalid Input | The input provided is invalid. |
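On the dApp side, these codes can be mapped to user-facing messages. The sketch below (a hypothetical helper, not part of the standard) simply mirrors the table above:

```typescript
interface SignTxnsError extends Error {
  code: number;
  data?: any;
}

// Map a standard status code to its description from the table above;
// fall back to the error's own message for non-standard codes.
function describeSignTxnsError(err: SignTxnsError): string {
  switch (err.code) {
    case 4001: return "The user rejected the request.";
    case 4100: return "The requested operation and/or account has not been authorized by the user.";
    case 4200: return "The wallet does not support the requested operation.";
    case 4201: return "The wallet does not support signing that many transactions at a time.";
    case 4202: return "The wallet was not initialized properly beforehand.";
    case 4300: return "The input provided is invalid.";
    default:   return `Unknown wallet error (${err.code}): ${err.message}`;
  }
}
```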
### Wallet-specific extensions
[Section titled “Wallet-specific extensions”](#wallet-specific-extensions)
Wallets **MAY** use specific extension fields in `WalletTransaction` and in `SignTxnsOpts`. These fields must start with: `_walletName`, where `walletName` is the name of the wallet. Wallet designers **SHOULD** ensure that their wallet name is not already used.
> Example of a wallet-specific fields in `opts` (for the wallet `theBestAlgorandWallet`): `_theBestAlgorandWalletIcon` for displaying an icon related to the transactions.
Wallet-specific extensions **MUST** be designed such that a wallet not understanding them would not provide a lower security level.
> Example of a forbidden wallet-specific field in `WalletTransaction`: `_theWorstAlgorandWalletDisable` requires this transaction not to be signed. This is dangerous for security as any signed transaction may leak and be committed by an attacker. Therefore, the dApp should never submit transactions that should not be signed, and that some wallets (not supporting this extension) may still sign.
### Semantic and Security Requirements
[Section titled “Semantic and Security Requirements”](#semantic-and-security-requirements)
The call `signTxns(txns, opts)` **MUST** either throw an error or return an array `ret` of the same length as the `txns` array:
1. If `txns[i].signers` is an empty array, the wallet **MUST NOT** sign the transaction `txns[i]`, and:
* if `txns[i].stxn` is not present, `ret[i]` **MUST** be set to `null`.
* if `txns[i].stxn` is present and is a valid `SignedTxnStr` with the underlying transaction exactly matching `txns[i].txn`, `ret[i]` **MUST** be set to `txns[i].stxn`. (See section on the semantic of `WalletTransaction` for the exact requirements on `txns[i].stxn`.)
* otherwise, the wallet **MUST** throw a `4300` error.
2. Otherwise, the wallet **MUST** sign the transaction `txns[i].txn` and `ret[i]` **MUST** be set to the corresponding `SignedTxnStr`.
Note that if any transaction `txns[i]` that should be signed (i.e., where `txns[i].signers` is not an empty array) cannot be signed for any reason, the wallet **MUST** throw an error.
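The rules above can be sketched as follows. This is an illustrative sketch only, not a reference implementation: the `sign` callback stands in for the wallet's actual key management, and the required validation of `stxn` against `txn` (with a `4300` error on mismatch) is omitted:

```typescript
type SignedTxnStr = string;

interface WalletTransaction {
  txn: string;
  signers?: string[];
  stxn?: SignedTxnStr;
}

function signTxnsSketch(
  txns: WalletTransaction[],
  sign: (txn: string) => SignedTxnStr // stand-in for the wallet's signer
): (SignedTxnStr | null)[] {
  return txns.map((t) => {
    if (t.signers && t.signers.length === 0) {
      // Rule 1: never sign; echo a provided stxn, else null.
      // (A real wallet must also validate stxn against txn.)
      return t.stxn ?? null;
    }
    // Rule 2: sign, or let the error propagate so the whole call rejects.
    return sign(t.txn);
  });
}
```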
#### Terminology: Validation, Warnings, Fields
[Section titled “Terminology: Validation, Warnings, Fields”](#terminology-validation-warnings-fields)
All the field names below are the ones in the [Go `SignedTxn` structure](https://github.com/algorand/go-algorand/blob/304815d00b9512cf9f91dbb987fead35894676f4/data/transactions/signedtxn.go#L31) and the [Go `Transaction` structure](https://github.com/algorand/go-algorand/blob/304815d00b9512cf9f91dbb987fead35894676f4/data/transactions/transaction.go#L81). Fields of the actual transaction are prefixed with `txn.` (as opposed to fields of the `WalletTransaction` such as `signers`). For example, the sender of a transaction is `txn.Sender`.
**Rejecting** means throwing a `4300` error.
Strong warning / warning / weak warning / informational messages are different levels of alerts. Strong warnings **MUST** be displayed in such a way that the user cannot miss their importance.
#### Semantic of `WalletTransaction`
[Section titled “Semantic of WalletTransaction”](#semantic-of-wallettransaction)
* `txn`:
* Must be a base64 encoding of the canonical msgpack encoding of a `Transaction` object as defined in the [Algorand specs](https://github.com/algorandfoundation/specs). For Algorand version 2.5.5, see the [authorization and signatures Section](https://github.com/algorandfoundation/specs/blob/d050b3cade6d5c664df8bd729bf219f179812595/dev/ledger.md#authorization-and-signatures) of the specs or the [Go structure](https://github.com/algorand/go-algorand/blob/304815d00b9512cf9f91dbb987fead35894676f4/data/transactions/transaction.go#L81).
* If `txn` is not a base64 string or cannot be decoded into a `Transaction` object, the wallet **MUST** reject.
* `authAddr`:
* The wallet **MAY** not support this field. In that case, it **MUST** throw a `4200` error.
* If specified, it must be a valid Algorand address. If this is not the case, the wallet **MUST** reject.
* If specified and supported, the wallet **MUST** sign the transaction using this authorized address *even if it sees the sender address `txn.Sender` was not rekeyed to `authAddr`*. This is because the sender may be rekeyed before the transaction is committed. The wallet **SHOULD** display an informational message.
* `msig`:
* The wallet **MAY** not support this field. In that case, it **MUST** throw a `4200` error.
* If specified, it must be a valid `MultisigMetadata` object. If this is not the case, the wallet **MUST** reject.
* If specified and supported, the wallet **MUST** verify `msig` matches `authAddr` (if `authAddr` is specified and supported) or the sender address `txn.Sender` (otherwise). The wallet **MUST** reject if this is not the case.
* If specified and supported and if `signers` is not specified, the wallet **MUST** return a `SignedTxn` with all the subsigs that it can provide and that the wallet user agrees to provide. If the wallet can sign more subsigs than the requested threshold (`msig.threshold`), it **MAY** only provide `msig.threshold` subsigs. It is also possible that the wallet cannot provide at least `msig.threshold` subsigs (either because the user prevented signing with some keys or because the wallet does not know enough keys). In that case, the wallet just provides the subsigs it can. However, the wallet **MUST** provide at least one subsig or throw an error.
* `signers`:
* If specified and if not a list of valid Algorand addresses, the wallet **MUST** reject.
* If `signers` is an empty array, the transaction is for information purpose only and the wallet **SHALL NOT** sign it, even if it can (e.g., know the secret key of the sender address).
* If `signers` is an array with more than one Algorand address:
* The wallet **MUST** reject if `msig` is not specified.
* The wallet **MUST** reject if `signers` is not a subset of `msig.addrs`.
* The wallet **MUST** try to return a `SignedTxn` with all the subsigs corresponding to `signers` signed. If it cannot, it **SHOULD** throw a `4001` error. Note that this is different than when `signers` is not provided, where the signing is only “best effort”.
* If `signers` is an array with a single Algorand address:
* If `msig` is specified, the same rules as when `signers` is an array with more than one Algorand address apply.
* If `authAddr` is specified but `msig` is not, the wallet **MUST** reject if `signers[0]` is not equal to `authAddr`.
* If neither `authAddr` nor `msig` are specified, the wallet **MUST** reject if `signers[0]` is not the sender address `txn.Sender`.
* In all cases, the wallet **MUST** only try to provide signatures for `signers[0]`. In particular, if the sender address `txn.Sender` was rekeyed or is a multisig and if `authAddr` and `msig` are not specified, then the wallet **MUST** reject.
* `stxn`:
* If specified and if `signers` is not the empty array, the wallet **MUST** reject.
* If specified:
* It must be a valid `SignedTxnStr`. The wallet **MUST** reject if this is not the case.
* The wallet **MUST** reject if the field `txn` inside the `SignedTxn` object does not match exactly the `Transaction` object in `txn`.
* The wallet **MAY NOT** check whether the other fields of the `SignedTxn` are valid. In particular, it **MAY** accept `stxn` even in the following cases: it contains an invalid signature `sig`; it contains both a signature `sig` and a logicsig `lsig`; or it contains a logicsig `lsig` that always rejects.
* `message`:
* The wallet **MAY** decide to never print the message, to only print its first characters, or to alter the message in any way that improves security. The wallet **MUST** be designed so that the message cannot easily be used to trick the user into performing an incorrect action. In particular, if displayed, the message must appear in an area that is easily and clearly identifiable as not trusted by the wallet.
* The wallet **MUST** prevent HTML/JS injection and must only display plaintext messages.
* `groupMessage` obeys the same rules as `message`, except it is a message common to all the transactions of the group containing the current transaction. In addition, the wallet **MUST** reject if `groupMessage` is provided for a transaction that is not the first transaction of the group. Note that `txns` may contain multiple groups of transactions, one after the other (see the Group Validation section for details).
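Taken together, the `signers` rules above amount to a small decision procedure. The sketch below is illustrative only (the `checkSigners` helper and the pre-decoded `sender` field are assumptions, not part of this ARC); a real wallet would decode `txn.Sender` from the base64 transaction itself:

```typescript
// Illustrative sketch of the `signers` checks above; field names follow
// this ARC, everything else (helper name, pre-decoded sender) is assumed.
interface WalletTransactionFields {
  txn: string;        // base64-encoded transaction
  sender: string;     // decoded txn.Sender (assumed pre-decoded here)
  authAddr?: string;
  msig?: { version: number; threshold: number; addrs: string[] };
  signers?: string[];
}

// Returns null if the `signers` rules pass, or a rejection reason.
function checkSigners(w: WalletTransactionFields): string | null {
  const { signers, msig, authAddr, sender } = w;
  if (signers === undefined) return null; // best-effort signing applies
  if (signers.length === 0) return null;  // information-only: do not sign
  if (msig !== undefined) {
    // With msig, every requested signer must belong to msig.addrs.
    return signers.every((s) => msig.addrs.includes(s))
      ? null
      : "signers is not a subset of msig.addrs";
  }
  if (signers.length > 1) return "several signers but msig is missing";
  if (authAddr !== undefined)
    return signers[0] === authAddr ? null : "signers[0] must equal authAddr";
  return signers[0] === sender ? null : "signers[0] must equal txn.Sender";
}
```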
##### Particular Case without `signers`, nor `msig`, nor `authAddr`
[Section titled “Particular Case without signers, nor msig, nor authAddr”](#particular-case-without-signers-nor-msig-nor-authaddr)
When neither `signers`, nor `msig`, nor `authAddr` are specified, the wallet **MAY** still sign the transaction using a multisig or a different authorized address than the sender address `txn.Sender`. It may also sign the transaction using a logicsig.
However, in all these cases, the resulting `SignedTxn` **MUST** be such that it can be committed to the blockchain (assuming the transaction itself can be executed and that the account is not rekeyed in the meantime).
In particular, if a multisig is used, the number of subsigs provided must be at least equal to the multisig threshold. This is different from the case where `msig` is provided, where the wallet **MAY** provide fewer subsigs than the threshold.
#### Semantic of `SignTxnsOpts`
[Section titled “Semantic of SignTxnsOpts”](#semantic-of-signtxnsopts)
* `message` obeys the same rules as `WalletTransaction.message`, except it is a message common to all transactions.
#### General Validation
[Section titled “General Validation”](#general-validation)
The goal is to ensure the highest level of security for the end-user, even when the transaction is generated by a malicious dApp. Every input must be validated.
Validation:
* **SHALL NOT** rely on TypeScript typing as this can be bypassed. Types **MUST** be manually verified.
* **SHALL NOT** assume the Algorand SDK does any validation, as the Algorand SDK is not meant to receive maliciously generated inputs. Furthermore, the SDK allows for dangerous transactions (such as rekeying). The only exception to the above rule is for de-serialization of transactions. Once de-serialized, every field of the transaction must be manually validated.
> Note: We will be working with the algosdk team to provide helper functions for validation in some cases and to ensure the security of the de-serialization of potentially malicious transactions.
If there is any unexpected field at any level (both in the transaction itself or in the object WalletTransaction), the wallet **MUST** immediately reject. The only exception is for the “wallet-specific extension” fields (see above).
#### Group Validation
[Section titled “Group Validation”](#group-validation)
The wallet should support the following two use cases:
1. (**REQUIRED**) `txns` is a non-empty array of transactions that belong to the same group of transactions. In other words, either `txns` is an array of a single transaction with a zero group ID (`txn.Group`), or `txns` is an array of one or more transactions with the *same* non-zero group ID. The wallet **MUST** reject if the transactions do not match their group ID. (The dApp must provide the transactions in the order defined by the group ID.)
> An early draft of this ARC required that the size of a group of transactions be greater than 1 but, since the Algorand protocol supports groups of size 1, this requirement has been changed so that dApps do not need special cases for single transactions and can always send a group to the wallet.
2. (**OPTIONAL**) `txns` is a concatenation of arrays of transactions, each of the form described in case 1:
* All transactions with the *same* non-zero group ID must be consecutive and must match their group ID. The wallet **MUST** reject if the above is not satisfied.
* The wallet UI **MUST** be designed so that it is clear to the user when transactions are grouped (i.e., form an atomic transfer) and when they are not. It **SHOULD** provide very clear explanations that are understandable by beginner users, so that they cannot easily be tricked into signing what they believe is an atomic exchange when it is actually a one-sided payment.
If `txns` does not match any of the formats above, the wallet **MUST** reject.
The wallet **MAY** choose to restrict the maximum size of the array `txns`. The maximum size allowed by a wallet **MUST** be at least the maximum size of a group of transactions in the current Algorand protocol on MainNet. (When this ARC was published, this maximum size was 16.) If the wallet rejects `txns` because of its size, it **MUST** throw a 4201 error.
An early draft of this API allowed signing single transactions in a group without providing the other transactions in the group. For security reasons, this use case is now deprecated and **SHALL NOT** be allowed in new implementations. Existing implementations may continue allowing single transactions to be signed if a very clear warning is displayed to the user. The warning **MUST** stress that signing the transaction may incur losses much higher than the amount of tokens indicated in the transaction, because potential future features of Algorand may have such consequences (e.g., a signature of a transaction may actually authorize the full group under some circumstances).
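The consecutiveness requirement in case 2 can be sketched as a pure check over the group IDs. This is an illustrative helper, not normative: it assumes the group IDs have already been extracted (e.g., via `algosdk.decodeUnsignedTransaction`) and base64-encoded, and it does not replace recomputing each group ID to verify the transactions actually match it:

```typescript
// Sketch of the grouping rule: every transaction with the same non-zero
// group ID must be consecutive; transactions without a group ID stand alone.
// `groupIds` holds one base64 group ID (or undefined) per transaction.
function groupsAreConsecutive(groupIds: (string | undefined)[]): boolean {
  const seen = new Set<string>();
  let current: string | undefined;
  for (const gid of groupIds) {
    if (gid === undefined) { current = undefined; continue; } // singleton
    if (gid === current) continue;                            // same run
    if (seen.has(gid)) return false; // group ID reappears after a break
    seen.add(gid);
    current = gid;
  }
  return true;
}
```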
#### Transaction Validation
[Section titled “Transaction Validation”](#transaction-validation)
##### Inputs that Must Be Systematically Rejected
[Section titled “Inputs that Must Be Systematically Rejected”](#inputs-that-must-be-systematically-rejected)
* Transactions `WalletTransaction.txn` with fields that are not known by the wallet **MUST** be systematically rejected. In particular:
* Every field **MUST** be validated.
* Any extra field **MUST** systematically make the wallet reject.
* This is to prevent any security issue in case of the introduction of new dangerous fields (such as `txn.RekeyTo` or `txn.CloseRemainderTo`).
* Transactions of an unknown type (field `txn.Type`) **MUST** be rejected.
* Transactions containing fields of a different transaction type (e.g., `txn.Receiver` in an asset transfer transaction) **MUST** be rejected.
##### Inputs that Warrant Display of Warnings
[Section titled “Inputs that Warrant Display of Warnings”](#inputs-that-warrant-display-of-warnings)
The wallet **MUST**:
* Display a strong warning message when signing a transaction with one of the following fields: `txn.RekeyTo`, `txn.CloseRemainderTo`, `txn.AssetCloseTo`. The warning message **MUST** clearly explain the risks. No warning message is necessary for transactions that are provided for informational purposes in a group and are not signed (i.e., transactions with `signers=[]`).
* Display a strong warning message in case the transaction only becomes valid in the future (its first valid round is after the current round plus some margin, e.g., 500 rounds). This is to prevent surprises where a user forgets that they signed a transaction and the dApp maliciously plays it later.
* Display a warning message when the fee is too high. The threshold **MAY** depend on the load of the Algorand network.
* Display a weak warning message when signing a transaction that can increase the minimum balance in a way that may be hard or impossible to undo (asset creation or application creation).
* Display an informational message when signing a transaction that can increase the minimum balance in a way that can be undone (opt-in to an asset or an application).
The above is for version 2.5.6 of the Algorand software. Future consensus versions may require additional checks.
Before supporting any new transaction field or type (for a new version of the Algorand blockchain), the wallet authors **MUST** perform a careful security analysis.
#### Genesis Validation
[Section titled “Genesis Validation”](#genesis-validation)
The wallet **MUST** check that the genesis hash (field `txn.GenesisHash`) and the genesis ID (field `txn.GenesisID`, if provided) match the network used by the wallet. If the wallet supports multiple networks, it **MUST** make clear to the user which network is used.
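A minimal sketch of this check, assuming the wallet keeps a table of supported networks (the `checkGenesis` helper is illustrative; the hashes are the well-known MainNet and TestNet genesis hashes):

```typescript
// Genesis hashes of the networks this hypothetical wallet supports.
const SUPPORTED_NETWORKS: Record<string, string> = {
  "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=": "mainnet-v1.0",
  "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=": "testnet-v1.0",
};

// Returns the network name to display to the user, or throws (reject).
function checkGenesis(genesisHash: string, genesisId?: string): string {
  const network = SUPPORTED_NETWORKS[genesisHash];
  if (!network) throw new Error("unknown genesis hash: reject");
  // GenesisID is optional, but if present it must match the same network.
  if (genesisId !== undefined && genesisId !== network)
    throw new Error("genesis ID does not match genesis hash: reject");
  return network;
}
```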
#### UI
[Section titled “UI”](#ui)
In general, the UI **MUST** ensure that the user cannot be confused by the dApp into performing dangerous operations. In particular, the wallet **MUST** make clear to the user which parts of the display belong to the wallet UI and which come from the dApp.
Special care **MUST** be taken when:
* Displaying the `message` field of `WalletTransaction` and of `SignTxnsOpts`.
* Displaying any arbitrary field of transactions, including the note field (`txn.Note`), the genesis ID (`txn.GenesisID`), and asset configuration fields (`txn.AssetName`, `txn.UnitName`, `txn.URL`, …)
* Displaying messages hidden in fields that are expected to be base32/base64 strings or addresses. Using a different font for those fields **MAY** be an option to prevent such confusion.
Usual precautions **MUST** be taken regarding the fact that the inputs are provided by an untrusted dApp (e.g., preventing code injection and so on).
## Rationale
[Section titled “Rationale”](#rationale)
The API was designed to:
* Be easily implementable by all Algorand wallets
* Rely on the official [specs](https://github.com/algorandfoundation/specs/blob/master/dev/ledger.md) and the [official source code](https://github.com/algorand/go-algorand/blob/304815d00b9512cf9f91dbb987fead35894676f4/data/transactions/signedtxn.go#L31).
* Only use types supported by JSON to simplify interoperability (avoid Uint8Array for example) and to allow easy serialization / deserialization
* Be easy to extend to support future features of Algorand
* Be secure by design: making it hard for malicious dApps to cause the wallet to sign a transaction without the user understanding the implications of their signature.
The API was not designed to:
* Directly support SDK objects. SDK objects must first be serialized.
* Support listing accounts, connecting to the wallet, sending transactions, …
* Support signing logic signatures.
The last two items are expected to be defined in other documents.
### Rationale for Group Validation
[Section titled “Rationale for Group Validation”](#rationale-for-group-validation)
The requirements around group validation have been designed to prevent the following attack.
The dApp pretends to buy 1 Algo from the user for 10 USDC, but actually creates an atomic transfer in which the user sends 1 Algo to the dApp while the dApp sends only 0.01 USDC back. It then sends the wallet the user’s 1 Algo transaction together with a 10 USDC transaction that is not part of the actual group. If the wallet does not verify that the transactions form a valid group, the user is led to believe they are signing the intended atomic transfer.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
> This section is non-normative.
### Sign a Group of Two Transactions
[Section titled “Sign a Group of Two Transactions”](#sign-a-group-of-two-transactions)
Here is a Node.js example of how to use the wallet interface to sign a group of two transactions and send them to the network. The function `signTxns` is assumed to be a method of `algorandWallet`.
> Note: We will be working with the algosdk team to add two helper functions that facilitate use of the wallet. The current idea is to add `Transaction.toBase64`, which does the same as `Transaction.toByte` except that it outputs a base64 string, and `Algodv2.sendBase64RawTransactions`, which does the same as `Algodv2.sendRawTransactions` except that it takes an array of base64 strings instead of an array of Uint8Arrays.
```typescript
import algosdk from 'algosdk';
import * as algorandWallet from './wallet';
import {Buffer} from "buffer";
const firstRound = 13809129;
const suggestedParams = {
flatFee: false,
fee: 0,
firstRound: firstRound,
lastRound: firstRound + 1000,
genesisID: 'testnet-v1.0',
genesisHash: 'SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI='
};
const txn1 = algosdk.makePaymentTxnWithSuggestedParamsFromObject({
from: "37MSZIPXHGNCKTDJTJDSYIOF4C57JAL2FTKESD2HBVELXYHEIXVZ4JVGFU",
to: "PKSE2TARC645D4O2IO6QNWVW6PLJDTR6IOKNKMGSHQL7JIJHNGNFVISUHI",
amount: 1000,
suggestedParams,
});
const txn2 = algosdk.makePaymentTxnWithSuggestedParamsFromObject({
from: "37MSZIPXHGNCKTDJTJDSYIOF4C57JAL2FTKESD2HBVELXYHEIXVZ4JVGFU",
to: "PKSE2TARC645D4O2IO6QNWVW6PLJDTR6IOKNKMGSHQL7JIJHNGNFVISUHI",
amount: 2000,
suggestedParams,
});
const txs = [txn1, txn2];
algosdk.assignGroupID(txs);
const txn1B64 = Buffer.from(txn1.toByte()).toString("base64");
const txn2B64 = Buffer.from(txn2.toByte()).toString("base64");
(async () => {
const signedTxs = await algorandWallet.signTxns([
{txn: txn1B64},
{txn: txn2B64}
]);
const algodClient = new algosdk.Algodv2("", "...", "");
await algodClient.sendRawTransaction(
signedTxs.map(stxB64 => Buffer.from(stxB64, "base64"))
).do();
})();
```
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Transaction Note Field Conventions
> Conventions for encoding data in the note field at application-level
## Abstract
[Section titled “Abstract”](#abstract)
The goal of these conventions is to make it simpler for block explorers and indexers to parse the data in the note fields and filter transactions of certain dApps.
## Specification
[Section titled “Specification”](#specification)
Note fields should be formatted as follows:
for dApps
```plaintext
<dapp-name>:<data-format><data>
```
for ARCs
```plaintext
arc<arc-number>:<data-format><data>
```
where:
* `<dapp-name>` is the name of the dApp:
* Regexp to satisfy: `[a-zA-Z0-9][a-zA-Z0-9_/@.-]{4,31}`. In other words, a name should:
* only contain alphanumerical characters or `_`, `/`, `-`, `@`, `.`
* start with an alphanumerical character
* be at least 5 characters long
* be at most 32 characters long
* Names starting with `a/` and `af/` are reserved for the Algorand protocol and for Algorand Foundation use.
* `<arc-number>` is the number of the ARC:
* Regexp to satisfy: `\b(0|[1-9]\d*)\b`. In other words, an arc-number should:
* Contain only digits, without any leading-zero padding
* `<data-format>` is one of the following:
* `m`: [MsgPack](https://msgpack.org)
* `j`: [JSON](https://json.org)
* `b`: arbitrary bytes
* `u`: utf-8 string
* `<data>` is the actual data in the format specified by `<data-format>`
**WARNING**: Any user can create transactions with arbitrary data and may impersonate other dApps. In particular, the fact that a note field starts with `<dapp-name>:` does not guarantee that it indeed comes from that dApp. The value `<dapp-name>` cannot be relied upon to ensure provenance and validity of the `<data>`.
**WARNING**: Any user can create transactions with arbitrary data, including ARC numbers, which may not correspond to the intended standard. An ARC number included in a note field does not ensure compliance with the corresponding standard. The value of the ARC number cannot be relied upon to ensure the provenance and validity of the `<data>`.
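A client-side parser for these conventions can be sketched as follows (the `parseNote` helper and `ParsedNote` type are illustrative, not part of the specification):

```typescript
type NoteFormat = "m" | "j" | "b" | "u"; // MsgPack, JSON, bytes, utf-8

interface ParsedNote {
  kind: "dapp" | "arc";
  name: string;       // dApp name, or the ARC number as a string
  format: NoteFormat;
  data: string;
}

// Name: 5-32 chars, alphanumeric start; ARC number: digits, no padding.
const DAPP_RE = /^([a-zA-Z0-9][a-zA-Z0-9_\/@.-]{4,31}):([mjbu])([\s\S]*)$/;
const ARC_RE = /^arc(0|[1-9]\d*):([mjbu])([\s\S]*)$/;

function parseNote(note: string): ParsedNote | null {
  // Try the ARC form first, since "arc<number>" can also match the
  // dApp-name pattern once the combined string is 5+ characters long.
  const arc = ARC_RE.exec(note);
  if (arc)
    return { kind: "arc", name: arc[1], format: arc[2] as NoteFormat, data: arc[3] };
  const dapp = DAPP_RE.exec(note);
  if (dapp)
    return { kind: "dapp", name: dapp[1], format: dapp[2] as NoteFormat, data: dapp[3] };
  return null;
}
```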
### Versioning
[Section titled “Versioning”](#versioning)
This document suggests the following convention for the names of dApps with multiple versions: `mydapp/v1`, `mydapp/v2`, … However, dApps are free to use any other convention and may include the version inside the `<data>` part instead of the `<dapp-name>` part.
## Rationale
[Section titled “Rationale”](#rationale)
The goal of these conventions is to facilitate displaying notes by block explorers and filtering of transactions by notes. However, the note field **cannot be trusted**, as any user can create transactions with arbitrary note fields. An external mechanism needs to be used to ensure the validity and provenance of the data. For example:
* Some dApps may only send transactions from a small set of accounts controlled by the dApps. In that case, the sender of the transaction should be checked.
* Some dApps may fund escrow accounts created from some template TEAL script. In that case, the note field may contain the template parameters and the escrow account address should be checked to correspond to the resulting TEAL script.
* Some dApps may include a signature in the `<data>` part of the note field. The `<data>` may be a MsgPack encoding of a structure of the form:
```json
{
"d": ..., // actual data
"sig": ... // signature of the actual data (encoded using MsgPack)
}
```
In that case, the signature should be checked.
The conventions were designed to support multiple use cases of the notes. Some dApps may just record data on the blockchain without using any smart contracts. Such dApps typically would use JSON or MsgPack encoding.
On the other hand, dApps that need to read note fields from smart contracts will most likely require easier-to-parse data formats, typically application-specific byte strings.
Since `<dapp-name>:` is a prefix of the note, transactions for a given dApp can easily be filtered by the [indexer](https://github.com/algorand/indexer).
The restrictions on dApp names were chosen to allow most usual names while avoiding any encoding or displaying issues. The maximum length (32) matches the maximum length of an ASA name on Algorand, while the minimum length (5) has been chosen to limit collisions.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
> This section is non-normative.
Consider [ARC-20](/arc-standards/arc-0020), which provides information about a Smart ASA’s Application.
Here is a potential note indicating that the Application ID is 123:
* JSON without version:
```plaintext
arc20:j{"application-id":123}
```
Consider a dApp named `algoCityTemp` that stores temperatures from cities on the blockchain.
Here are some potential notes indicating that Singapore’s temperature is 35 degrees Celsius:
* JSON without version:
```plaintext
algoCityTemp:j{"city":"Singapore","temp":35}
```
* JSON with version in the name:
```plaintext
algoCityTemp/v1:j{"city":"Singapore","temp":35}
```
* JSON with version in the name with index lookup:
```plaintext
algoCityTemp/v1/35:j{"city":"Singapore","temp":35}
```
* JSON with version in the data:
```plaintext
algoCityTemp:j{"city":"Singapore","temp":35,"ver":1}
```
* UTF-8 string without version:
```plaintext
algoCityTemp:uSingapore|35
```
* Bytes where the temperature is encoded as a signed 1-byte integer in the first position:
```plaintext
algoCityTemp:b#Singapore
```
(`#` is the ASCII character for 35.)
* MsgPack corresponding to the JSON example with version in the name. The string is encoded in base64 as it contains characters that cannot be printed in this document. But the note should contain the actual bytes and not the base64 encoding of them:
```plaintext
YWxnb0NpdHlUZW1wL3YxOoKkY2l0ealTaW5nYXBvcmWkdGVtcBg=
```
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
> Not Applicable
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Conventions Fungible/Non-Fungible Tokens
> Parameters Conventions for Algorand Standard Assets (ASAs) for fungible tokens and non-fungible tokens (NFTs).
## Abstract
[Section titled “Abstract”](#abstract)
The goal of these conventions is to make it simpler for block explorers, wallets, exchanges, marketplaces, and more generally, client software to display the properties of a given ASA.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
An [ARC-3](/arc-standards/arc-0003) ASA has an associated JSON Metadata file, formatted as specified below, that is stored off-chain.
### ASA Parameters Conventions
[Section titled “ASA Parameters Conventions”](#asa-parameters-conventions)
The ASA parameters should follow the following conventions:
* *Unit Name* (`un`): no restriction but **SHOULD** be related to the name in the JSON Metadata file
* *Asset Name* (`an`): **MUST** be:
* (**NOT RECOMMENDED**) either exactly `arc3` (without any space)
* (**NOT RECOMMENDED**) or `<name>@arc3`, where `<name>` **SHOULD** be closely related to the name in the JSON Metadata file:
* If the resulting asset name can fit the *Asset Name* field, then `<name>` **SHOULD** be equal to the name in the JSON Metadata file.
* If the resulting asset name cannot fit the *Asset Name* field, then `<name>` **SHOULD** be a reasonably shortened version of the name in the JSON Metadata file.
* (**RECOMMENDED**) or `<name>`, where `<name>` is defined as above. In this case, the Asset URL **MUST** end with `#arc3`.
* *Asset URL* (`au`): a URI pointing to a JSON Metadata file.
* This URI as well as any URI in the JSON Metadata file:
* **SHOULD** be persistent and allow the JSON Metadata file to be downloaded forever.
* **MAY** contain the string `{id}`. If `{id}` exists in the URI, clients **MUST** replace this with the asset ID in decimal form. The rules below apply after such a replacement.
* **MUST** follow [RFC-3986](https://www.ietf.org/rfc/rfc3986.txt) and **MUST NOT** contain any whitespace character
* **SHOULD** use one of the following URI schemes (for compatibility and security): *https* and *ipfs*:
* When the file is stored on IPFS, the `ipfs://...` URI **SHOULD** be used. IPFS Gateway URI (such as `https://ipfs.io/ipfs/...`) **SHOULD NOT** be used.
* **SHOULD NOT** use the following URI scheme: *http* (due to security concerns).
* **MUST** be such that the returned resource includes the CORS header
```plaintext
Access-Control-Allow-Origin: *
```
if the URI scheme is *https*
> This requirement is to ensure that client JavaScript can load all resources pointed by *https* URIs inside an ARC-3 ASA.
* **MAY** be a relative URI when inside the JSON Metadata file. In that case, the relative URI is relative to the Asset URL. The Asset URL **SHALL NOT** be relative. Relative URIs **MUST NOT** contain the character `:`. Clients **MUST** consider a URI as relative if and only if it does not contain the character `:`.
* If the Asset Name is neither `arc3` nor of the form `<name>@arc3`, then the Asset URL **MUST** end with `#arc3`.
* If the Asset URL ends with `#arc3`, clients **MUST** remove `#arc3` when linking to the URL. When displaying the URL, they **MAY** display `#arc3` in a different style (e.g., a lighter color).
* If the Asset URL ends with `#arc3`, the full URL with `#arc3` **SHOULD** be valid and point to the same resource as the URL without `#arc3`.
> This recommendation is to ensure backward compatibility with wallets that do not support ARC-3.
* *Asset Metadata Hash* (`am`):
* If the JSON Metadata file specifies extra metadata `e` (property `extra_metadata`), then `am` is defined as:
```plain
am = SHA-512/256("arc0003/am" || SHA-512/256("arc0003/amj" || content of JSON Metadata file) || e)
```
where `||` denotes concatenation and SHA-512/256 is defined in [NIST FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4). The above definition of `am` **MUST** be used when the property `extra_metadata` is specified, even if its value `e` is the empty string. Python code to compute the hash and a full example are provided below (see “Sample with Extra Metadata”).
> Extra metadata can be used to store data about the asset that needs to be accessed from a smart contract. The smart contract would not be able to directly read the metadata. But, if provided with the hash of the JSON Metadata file and with the extra metadata `e`, the smart contract can check that `e` is indeed valid.
* If the JSON Metadata file does not specify the property `extra_metadata`, then `am` is defined as the SHA-256 digest of the JSON Metadata file as a 32-byte string (as defined in [NIST FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4))
There are no requirements regarding the manager account of the ASA, nor its reserve, freeze, or clawback accounts.
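The extra-metadata hash defined above can be sketched in Node.js, whose built-in crypto module supports SHA-512/256 (the helper names are illustrative; the sample code this ARC provides is in Python):

```typescript
import { createHash } from "crypto";

// Sketch of the extra-metadata hash: am = SHA-512/256("arc0003/am" ||
// SHA-512/256("arc0003/amj" || JSON Metadata file) || e).
function sha512_256(...parts: Buffer[]): Buffer {
  const h = createHash("sha512-256");
  for (const p of parts) h.update(p);
  return h.digest();
}

// `metadataJson` is the raw bytes of the JSON Metadata file and `extra`
// the decoded `extra_metadata` bytes (possibly empty). Returns 32 bytes.
function assetMetadataHash(metadataJson: Buffer, extra: Buffer): Buffer {
  const inner = sha512_256(Buffer.from("arc0003/amj"), metadataJson);
  return sha512_256(Buffer.from("arc0003/am"), inner, extra);
}
```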
> Clients recognize ARC-3 ASAs by looking at the Asset Name and Asset URL. If the Asset Name is `arc3` or ends with `@arc3`, or if the Asset URL ends with `#arc3`, the ASA is to be considered an ARC-3 ASA.
#### Pure and Fractional NFTs
[Section titled “Pure and Fractional NFTs”](#pure-and-fractional-nfts)
An ASA is said to be a *pure non-fungible token* (*pure NFT*) if and only if it has the following properties:
* *Total Number of Units* (`t`) **MUST** be 1.
* *Number of Digits after the Decimal Point* (`dc`) **MUST** be 0.
An ASA is said to be a *fractional non-fungible token* (*fractional NFT*) if and only if it has the following properties:
* *Total Number of Units* (`t`) **MUST** be a power of 10 larger than 1: 10, 100, 1000, …
* *Number of Digits after the Decimal Point* (`dc`) **MUST** be equal to the logarithm in base 10 of the total number of units.
> In other words, the total supply of the ASA is exactly 1.
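The pure and fractional NFT conditions can be sketched as predicates over the ASA parameters `t` (total) and `dc` (decimals); the helper names are illustrative:

```typescript
// Pure NFT: exactly one indivisible unit.
function isPureNFT(t: number, dc: number): boolean {
  return t === 1 && dc === 0;
}

// Fractional NFT: total is a power of 10 greater than 1 and dc = log10(t),
// so the total supply is still exactly 1.
function isFractionalNFT(t: number, dc: number): boolean {
  if (!Number.isInteger(dc) || t <= 1 || dc < 1) return false;
  return t === 10 ** dc;
}
```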
### JSON Metadata File Schema
[Section titled “JSON Metadata File Schema”](#json-metadata-file-schema)
> The JSON Metadata file schema follows the Ethereum Improvement Proposal [ERC-1155 Metadata URI JSON Schema](https://eips.ethereum.org/EIPS/eip-1155) with the following main differences:
>
> * Support for integrity fields for any file pointed by any URI field as well as for localized JSON Metadata files.
> * Support for mimetype fields for any file pointed by any URI field.
> * Support for extra metadata that is hashed as part of the Asset Metadata Hash (`am`) of the ASA.
> * Adding the fields `external_url`, `background_color`, `animation_url` used by [OpenSea metadata format](https://docs.opensea.io/docs/metadata-standards).
Similarly to ERC-1155, the URI supports ID substitution. If the URI contains `{id}`, clients **MUST** substitute it with the asset ID in *decimal*.
> Contrary to ERC-1155, the ID is represented in decimal (instead of hexadecimal) to match what current APIs and block explorers use on the Algorand blockchain.
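The `{id}` substitution and `#arc3` fragment handling described above can be sketched as follows (helper names are illustrative):

```typescript
// Substitute {id} with the asset ID in decimal form, as clients must.
function resolveAssetUrl(assetUrl: string, assetId: number | bigint): string {
  return assetUrl.replace(/\{id\}/g, assetId.toString());
}

// Clients must remove a trailing #arc3 when linking to the URL.
function linkTarget(assetUrl: string): string {
  return assetUrl.endsWith("#arc3") ? assetUrl.slice(0, -5) : assetUrl;
}
```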
The JSON Metadata schema is as follows:
```json
{
"title": "Token Metadata",
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Identifies the asset to which this token represents"
},
"decimals": {
"type": "integer",
"description": "The number of decimal places that the token amount should display - e.g. 18, means to divide the token amount by 1000000000000000000 to get its user representation."
},
"description": {
"type": "string",
"description": "Describes the asset to which this token represents"
},
"image": {
"type": "string",
"description": "A URI pointing to a file with MIME type image/* representing the asset to which this token represents. Consider making any images at a width between 320 and 1080 pixels and aspect ratio between 1.91:1 and 4:5 inclusive."
},
"image_integrity": {
"type": "string",
"description": "The SHA-256 digest of the file pointed by the URI image. The field value is a single SHA-256 integrity metadata as defined in the W3C subresource integrity specification (https://w3c.github.io/webappsec-subresource-integrity)."
},
"image_mimetype": {
"type": "string",
"description": "The MIME type of the file pointed by the URI image. MUST be of the form 'image/*'."
},
"background_color": {
"type": "string",
"description": "Background color to display the asset. MUST be a six-character hexadecimal without a pre-pended #."
},
"external_url": {
"type": "string",
"description": "A URI pointing to an external website presenting the asset."
},
"external_url_integrity": {
"type": "string",
"description": "The SHA-256 digest of the file pointed by the URI external_url. The field value is a single SHA-256 integrity metadata as defined in the W3C subresource integrity specification (https://w3c.github.io/webappsec-subresource-integrity)."
},
"external_url_mimetype": {
"type": "string",
"description": "The MIME type of the file pointed by the URI external_url. It is expected to be 'text/html' in almost all cases."
},
"animation_url": {
"type": "string",
"description": "A URI pointing to a multi-media file representing the asset."
},
"animation_url_integrity": {
"type": "string",
"description": "The SHA-256 digest of the file pointed by the URI animation_url. The field value is a single SHA-256 integrity metadata as defined in the W3C subresource integrity specification (https://w3c.github.io/webappsec-subresource-integrity)."
},
"animation_url_mimetype": {
"type": "string",
"description": "The MIME type of the file pointed by the URI animation_url. If the MIME type is not specified, clients MAY guess the MIME type from the file extension or MAY decide not to display the asset at all. It is STRONGLY RECOMMENDED to include the MIME type."
},
"properties": {
"type": "object",
"description": "Arbitrary properties (also called attributes). Values may be strings, numbers, object or arrays."
},
"extra_metadata": {
"type": "string",
"description": "Extra metadata in base64. If the field is specified (even if it is an empty string) the asset metadata (am) of the ASA is computed differently than if it is not specified."
},
"localization": {
"type": "object",
"required": ["uri", "default", "locales"],
"properties": {
"uri": {
"type": "string",
"description": "The URI pattern to fetch localized data from. This URI should contain the substring `{locale}` which will be replaced with the appropriate locale value before sending the request."
},
"default": {
"type": "string",
"description": "The locale of the default data within the base JSON"
},
"locales": {
"type": "array",
"description": "The list of locales for which data is available. These locales should conform to those defined in the Unicode Common Locale Data Repository (http://cldr.unicode.org/)."
},
"integrity": {
"type": "object",
"patternProperties": {
".*": { "type": "string" }
},
"description": "The SHA-256 digests of the localized JSON files (except the default one). The field name is the locale. The field value is a single SHA-256 integrity metadata as defined in the W3C subresource integrity specification (https://w3c.github.io/webappsec-subresource-integrity)."
}
}
}
}
}
```
All the fields are **OPTIONAL**. But if provided, they **MUST** match the description in the JSON schema.
The field `decimals` is **OPTIONAL**. If provided, it **MUST** match the ASA parameter `dc`.
URI fields (`image`, `external_url`, `animation_url`, and `localization.uri`) in the JSON Metadata file are defined similarly as the Asset URL parameter `au`. However, contrary to the Asset URL, they **MAY** be relative (to the Asset URL). See Asset URL above.
#### Integrity Fields
[Section titled “Integrity Fields”](#integrity-fields)
Compared to ERC-1155, the JSON Metadata schema makes it possible to indicate digests of the files pointed to by any URI field. This ensures the integrity of all the files referenced by the ASA. Concretely, every URI field `xxx` is allowed to have an optional associated field `xxx_integrity` that specifies the digest of the file pointed to by the URI.
The digests are represented as a single SHA-256 integrity metadata as defined in the [W3C subresource integrity specification](https://w3c.github.io/webappsec-subresource-integrity). Details on how to generate those digests can be found on the [MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity) (where `sha384` or `384` are to be replaced by `sha256` and `256` respectively as only SHA-256 is supported by this ARC).
It is **RECOMMENDED** to specify all the `xxx_integrity` fields of all the `xxx` URI fields, except for `external_url_integrity` when it points to a potentially mutable website.
Any field with a name ending with `_integrity` **MUST** match a corresponding field containing a URI to a file with a matching digest. For example, if the field `hello_integrity` is specified, the field `hello` **MUST** exist and **MUST** be a URI pointing to a file with a digest equal to the digest specified by `hello_integrity`.
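The digest computation above can be sketched in Python. The helper name `sha256_integrity` is illustrative, not part of the standard; it produces a W3C subresource-integrity-style value of the form `sha256-<base64 digest>`:

```python
import base64
import hashlib

def sha256_integrity(data: bytes) -> str:
    """Compute an ARC-3 `xxx_integrity` value for the given file contents:
    the string "sha256-" followed by the base64-encoded SHA-256 digest."""
    digest = hashlib.sha256(data).digest()
    return "sha256-" + base64.b64encode(digest).decode("ascii")

# The empty file yields the well-known digest of zero bytes.
print(sha256_integrity(b""))  # sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
```

A client validating an asset would recompute this value over the fetched file and compare it to the corresponding `xxx_integrity` field.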
#### MIME Type Files
[Section titled “MIME Type Files”](#mime-type-files)
Compared to ERC-1155, the JSON Metadata schema allows the MIME type of the files pointed to by any URI field to be indicated. This allows clients to display the resource appropriately without having to first query it to find out the MIME type. Concretely, every URI field `xxx` is allowed to have an optional associated field `xxx_mimetype` that specifies the MIME type of the file pointed to by the URI.
It is **STRONGLY RECOMMENDED** to specify all the `xxx_mimetype` fields of all the `xxx` URI fields, except for `external_url_mimetype` when it points to a website. If the MIME type is not specified, clients **MAY** guess the MIME type from the file extension or **MAY** decide not to display the asset at all.
Clients **MUST NOT** rely on the `xxx_mimetype` fields from a security perspective and **MUST NOT** break or fail if the fields are incorrect (beyond not displaying the asset image or animation correctly). In particular, clients **MUST** take all necessary security measures to protect users against remote code execution or cross-site scripting attacks, even when the MIME type looks innocuous (like `image/png`).
> The above restriction is to protect clients and users against malformed or malicious ARC-3.
Any field with a name ending with `_mimetype` **MUST** match a corresponding field containing a URI to a file with a matching MIME type. For example, if the field `hello_mimetype` is specified, the field `hello` **MUST** exist and **MUST** be a URI pointing to a file whose MIME type matches the one specified by `hello_mimetype`.
#### Localization
[Section titled “Localization”](#localization)
If the JSON Metadata file contains a `localization` attribute, its content **MAY** be used to provide localized values for fields that need it. The `localization` attribute should be a sub-object with three **REQUIRED** attributes: `uri`, `default`, `locales`, and one **RECOMMENDED** attribute: `integrity`. If the string `{locale}` exists in any URI, it **MUST** be replaced with the chosen locale by all client software.
> Compared to ERC-1155, the `localization` attribute contains an additional optional `integrity` field that specifies the digests of the localized JSON files.
It is **RECOMMENDED** that `integrity` contains the digests of all the locales but the default one.
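The locale substitution and integrity check described above can be sketched as follows. The helper names `localized_uri` and `verify_locale_file` are illustrative, not part of the standard:

```python
import base64
import hashlib

def localized_uri(pattern: str, locale: str) -> str:
    """Substitute the chosen locale into a `localization.uri` pattern,
    replacing the `{locale}` placeholder as required by ARC-3."""
    return pattern.replace("{locale}", locale)

def verify_locale_file(data: bytes, expected_integrity: str) -> bool:
    """Check a fetched locale file against its entry in `localization.integrity`
    (a "sha256-<base64 digest>" subresource-integrity value)."""
    digest = base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")
    return ("sha256-" + digest) == expected_integrity
```

A client choosing locale `es` would fetch `localized_uri(metadata["localization"]["uri"], "es")` and, if an `integrity` entry for `es` exists, verify the fetched file against it before use.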
#### Examples
[Section titled “Examples”](#examples)
##### Basic Example
[Section titled “Basic Example”](#basic-example)
An example of an ARC-3 JSON Metadata file for a song follows. The properties array proposes some **SUGGESTED** formatting for token-specific display properties and metadata.
```json
{
"name": "My Song",
"description": "My first and best song!",
"image": "https://s3.amazonaws.com/your-bucket/song/cover/mysong.png",
"image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
"image_mimetype": "image/png",
"external_url": "https://mysongs.com/song/mysong",
"animation_url": "https://s3.amazonaws.com/your-bucket/song/preview/mysong.ogg",
"animation_url_integrity": "sha256-LwArA6xMdnFF3bvQjwODpeTG/RVn61weQSuoRyynA1I=",
"animation_url_mimetype": "audio/ogg",
"properties": {
"simple_property": "example value",
"rich_property": {
"name": "Name",
"value": "123",
"display_value": "123 Example Value",
"class": "emphasis",
"css": {
"color": "#ffffff",
"font-weight": "bold",
"text-decoration": "underline"
}
},
"array_property": {
"name": "Name",
"value": [1,2,3,4],
"class": "emphasis"
}
}
}
```
In the example, the `image` field **MAY** be the album cover, while the `animation_url` **MAY** be the full song or may just be a small preview. In the latter case, the full song **MAY** be specified by three additional properties inside the `properties` field:
```json
{
...
"properties": {
...
"file_url": "https://s3.amazonaws.com/your-bucket/song/full/mysong.ogg",
"file_url_integrity": "sha256-7IGatqxLhUYkruDsEva52Ku43up6774yAmf0k98MXnU=",
"file_url_mimetype": "audio/ogg"
}
}
```
An example of possible ASA parameters would be:
* *Asset Unit*: `mysong` for example
* *Asset Name*: `My Song`
* *Asset URL*: `https://example.com/mypict#arc3` or `https://arweave.net/MAVgEMO3qlqe-qHNVs00qgwwbCb6FY2k15vJP3gBLW4#arc3`
* *Metadata Hash*: the 32 bytes of the SHA-256 digest of the above JSON file
* *Total Number of Units*: 100
* *Number of Digits after the Decimal Point*: 2
> IPFS URLs of the form `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT#arc3` may be used too, but may cause issues with clients that do not support ARC-3 and that do not handle fragments in IPFS URLs.
Example of alternative versions for *Asset Name* and *Asset URL*:
* *Asset Name*: `My Song@arc3` or `arc3`
* *Asset URL*: `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT` or `https://example.com/mypict` or `https://arweave.net/MAVgEMO3qlqe-qHNVs00qgwwbCb6FY2k15vJP3gBLW4`
> These alternative versions are less recommended as they make the asset name harder to read for clients that do not support ARC-3.
The above parameters define a fractional NFT with 100 shares. The JSON Metadata file **MAY** contain the field `decimals: 2`:
```json
{
...
"decimals": 2
}
```
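The *Metadata Hash* parameter listed above can be sketched in Python. When no extra metadata is used (as in this basic example), the hash is simply the SHA-256 digest of the JSON Metadata file; the helper name `arc3_metadata_hash` is illustrative:

```python
import hashlib

def arc3_metadata_hash(json_metadata: bytes) -> bytes:
    """ASA Metadata Hash (am) when no extra metadata is specified:
    the plain 32-byte SHA-256 digest of the JSON Metadata file."""
    return hashlib.sha256(json_metadata).digest()
```

When the `extra_metadata` field is present, the SHA-512/256-based computation shown later in this section applies instead.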
##### Example with Relative URI and IPFS
[Section titled “Example with Relative URI and IPFS”](#example-with-relative-uri-and-ipfs)
> When using IPFS, it is convenient to bundle the JSON Metadata file with the other files referenced by the JSON Metadata file. In this case, because of circularity, it is necessary to use relative URIs.
An example of an ARC-3 JSON Metadata file using IPFS and relative URI is provided below:
```json
{
"name": "My Song",
"description": "My first and best song!",
"image": "mysong.png",
"image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
"image_mimetype": "image/png",
"external_url": "https://mysongs.com/song/mysong",
"animation_url": "mysong.ogg",
"animation_url_integrity": "sha256-LwArA6xMdnFF3bvQjwODpeTG/RVn61weQSuoRyynA1I=",
"animation_url_mimetype": "audio/ogg"
}
```
If the Asset URL is `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/metadata.json`:
* the `image` URI is `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/mysong.png`.
* the `animation_url` URI is `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/mysong.ogg`.
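The resolution rule in the bullets above can be sketched as follows. This is a simplified illustration (full RFC 3986 reference resolution handles more cases); `resolve_relative` is an illustrative helper name:

```python
def resolve_relative(asset_url: str, uri: str) -> str:
    """Resolve a possibly relative JSON Metadata URI against the Asset URL.
    Absolute URIs (those carrying a scheme) pass through unchanged; relative
    ones are resolved against the directory containing the Asset URL."""
    if "://" in uri:
        return uri
    base = asset_url.rsplit("/", 1)[0]  # strip the final path segment (e.g. metadata.json)
    return f"{base}/{uri}"
```

For the example above, resolving `mysong.png` against the Asset URL yields the full `ipfs://.../mysong.png` URI.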
##### Example with Extra Metadata and `{id}`
[Section titled “Example with Extra Metadata and {id}”](#example-with-extra-metadata-and-id)
An example of an ARC-3 JSON Metadata file with extra metadata and `{id}` is provided below.
```json
{
"name": "My Picture",
"description": "Lorem ipsum...",
"image": "https://s3.amazonaws.com/your-bucket/images/{id}.png",
"image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
"image_mimetype": "image/png",
"external_url": "https://mysongs.com/song/{id}",
"extra_metadata": "iHcUslDaL/jEM/oTxqEX++4CS8o3+IZp7/V5Rgchqwc="
}
```
The possible ASA parameters are the same as with the basic example, except for the metadata hash that would be the 32-byte string corresponding to the base64 string `xsmZp6lGW9ktTWAt22KautPEqAmiXxow/iIuJlRlHIg=`.
> For completeness, we provide below a Python program that computes this metadata hash:
```python
import base64
import hashlib
extra_metadata_base64 = "iHcUslDaL/jEM/oTxqEX++4CS8o3+IZp7/V5Rgchqwc="
extra_metadata = base64.b64decode(extra_metadata_base64)
json_metadata = """{
"name": "My Picture",
"description": "Lorem ipsum...",
"image": "https://s3.amazonaws.com/your-bucket/images/{id}.png",
"image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
"image_mimetype": "image/png",
"external_url": "https://mysongs.com/song/{id}",
"extra_metadata": "iHcUslDaL/jEM/oTxqEX++4CS8o3+IZp7/V5Rgchqwc="
}"""
h = hashlib.new("sha512_256")
h.update(b"arc0003/amj")
h.update(json_metadata.encode("utf-8"))
json_metadata_hash = h.digest()
h = hashlib.new("sha512_256")
h.update(b"arc0003/am")
h.update(json_metadata_hash)
h.update(extra_metadata)
am = h.digest()
print("Asset metadata in base64: ")
print(base64.b64encode(am).decode("utf-8"))
```
#### Localized Example
[Section titled “Localized Example”](#localized-example)
An example of an ARC-3 JSON Metadata file with localized metadata is presented below.
Base metadata file:
```json
{
"name": "Advertising Space",
"description": "Each token represents a unique Ad space in the city.",
"localization": {
"uri": "ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/{locale}.json",
"default": "en",
"locales": [
"en",
"es",
"fr"
],
"integrity": {
"es": "sha256-T0UofLOqdamWQDLok4vy/OcetEFzD8dRLig4229138Y=",
"fr": "sha256-UUM89QQlXRlerdzVfatUzvNrEI/gwsgsN/lGkR13CKw="
}
}
}
```
File `es.json`:
```json
{
"name": "Espacio Publicitario",
"description": "Cada token representa un espacio publicitario único en la ciudad."
}
```
File `fr.json`:
```json
{
"name": "Espace Publicitaire",
"description": "Chaque jeton représente un espace publicitaire unique dans la ville."
}
```
Note that if the base metadata file URI (i.e., the Asset URL) is `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/metadata.json`, then the `uri` field inside the `localization` field may be the relative URI `{locale}.json`.
## Rationale
[Section titled “Rationale”](#rationale)
These conventions are heavily based on Ethereum Improvement Proposal [ERC-1155 Metadata URI JSON Schema](https://eips.ethereum.org/EIPS/eip-1155) to facilitate interoperability.
The main differences are highlighted below:
* Asset Name and Asset Unit can be optionally specified in the ASA parameters. This is to allow wallets that are not aware of ARC-3 or that are not able to retrieve the JSON file to still display meaningful information.
* A digest of the JSON Metadata file is included in the ASA parameters to ensure integrity of this file. This is redundant with the URI when IPFS is used. But this is important to ensure the integrity of the JSON file when IPFS is not used.
* Similarly, the JSON Metadata schema is changed to allow specifying the SHA-256 digests of the localized versions, as well as the SHA-256 digests of any file pointed to by a URI property.
* MIME type fields are added to help clients know how to display the files pointed to by URI fields.
* When extra metadata is provided, the Asset Metadata Hash parameter is computed using SHA-512/256 with a prefix for proper domain separation. SHA-512/256 is the hash function used in Algorand in general (see the list of prefixes in ). Domain separation is especially important in this case to avoid mixing the hash of the JSON Metadata file with the extra metadata. However, since SHA-512/256 is less common and not every tool or library can compute it, SHA-256 is used instead when no extra metadata is specified.
* Support for relative URIs is added to allow storing both the JSON Metadata file and the files it refers to in the same IPFS directory.
Valid JSON Metadata files for ERC-1155 are valid JSON Metadata files for ARC-3. However, it is highly recommended that users always include the additional RECOMMENDED fields, such as the integrity fields.
The asset name is either `arc3` or suffixed by `@arc3` to allow client software to know when an asset follows the conventions.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
> Not Applicable
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Application Binary Interface (ABI)
> Conventions for encoding method calls in Algorand Application
## Abstract
[Section titled “Abstract”](#abstract)
This document introduces conventions for encoding method calls, including argument and return value encoding, in Algorand Application call transactions. The goal is to allow clients, such as wallets and dapp frontends, to properly encode call transactions based on a description of the interface. Further, explorers will be able to show details of these method invocations.
### Definitions
[Section titled “Definitions”](#definitions)
* **Application:** an Algorand Application, aka “smart contract”, “stateful contract”, “contract”, or “app”.
* **HLL:** a higher level language that compiles to TEAL bytecode.
* **dapp (frontend)**: a decentralized application frontend, interpreted here to mean an off-chain frontend (a webapp, native app, etc.) that interacts with Applications on the blockchain.
* **wallet**: an off-chain application that stores secret keys for on-chain accounts and can display and sign transactions for these accounts.
* **explorer**: an off-chain application that allows browsing the blockchain, showing details of transactions.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
Interfaces are defined in TypeScript. All the objects that are defined are valid JSON objects, and all JSON `string` types are UTF-8 encoded.
### Overview
[Section titled “Overview”](#overview)
This document makes recommendations for encoding method invocations as Application call transactions, and for describing methods for access by higher-level entities. Encoding recommendations are minimal, intended only to allow interoperability among Applications. Higher-level recommendations are intended to enhance user-facing interfaces, such as high-level languages, dapps, and wallets. Applications that follow the recommendations described here are called *[ARC-4](/arc-standards/arc-0004) Applications*.
### Methods
[Section titled “Methods”](#methods)
A method is a section of code intended to be invoked externally with an Application call transaction. A method must have a name, it may take a list of arguments as input when it is invoked, and it may return a single value (which may be a tuple) when it finishes running. The possible types for arguments and return values are described later in the [Encoding](#encoding) section.
Invoking a method involves creating an Application call transaction to specifically call that method. Methods are different from internal subroutines that may exist in a contract, but are not externally callable. Methods may be invoked by a top-level Application call transaction from an off-chain caller, or by an Application call inner transaction created by another Application.
#### Method Signature
[Section titled “Method Signature”](#method-signature)
A method signature is a unique identifier for a method. The signature is a string that consists of the method’s name, an open parenthesis, a comma-separated list of the types of its arguments, a closing parenthesis, and the method’s return type, or `void` if it does not return a value. The names of the arguments **MUST NOT** be included in a method’s signature, and the signature **MUST NOT** contain any whitespace.
For example, `add(uint64,uint64)uint128` is the method signature for a method named `add` which takes two uint64 parameters and returns a uint128. Signatures are encoded in ASCII.
For the benefit of universal interoperability (especially in HLLs), names **MUST** satisfy the regular expression `[_A-Za-z][A-Za-z0-9_]*`. Names starting with an underscore are reserved and **MUST** only be used as specified in this ARC or future ABI-related ARC.
#### Method Selector
[Section titled “Method Selector”](#method-selector)
Method signatures contain all the information needed to identify a method, however the length of a signature is unbounded. Rather than consume program space with such strings, a method selector is used to identify methods in calls. A method selector is the first four bytes of the SHA-512/256 hash of the method signature.
For example, the method selector for a method named `add` which takes two uint64 parameters and returns a uint128 can be computed as follows:
```plaintext
Method signature: add(uint64,uint64)uint128
SHA-512/256 hash (in hex): 8aa3b61f0f1965c3a1cbfa91d46b24e54c67270184ff89dc114e877b1753254a
Method selector (in hex): 8aa3b61f
```
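The computation above can be sketched in Python using the `sha512_256` algorithm from `hashlib` (available in builds linked against a recent OpenSSL); `method_selector` is an illustrative helper name:

```python
import hashlib

def method_selector(signature: str) -> bytes:
    """Return the 4-byte ARC-4 method selector: the first four bytes of
    the SHA-512/256 hash of the ASCII-encoded method signature."""
    return hashlib.new("sha512_256", signature.encode("ascii")).digest()[:4]

print(method_selector("add(uint64,uint64)uint128").hex())  # 8aa3b61f
```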
#### Method Description
[Section titled “Method Description”](#method-description)
A method description provides further information about a method beyond its signature. This description is encoded in JSON and consists of a method’s name, description (optional), arguments (their types, and optional names and descriptions), and return type and optional description for the return type. From this structure, the method’s signature and selector can be calculated. The Algorand SDKs provide convenience functions to calculate signatures and selectors from such JSON files.
These details will enable high-level languages and dapps/wallets to properly encode arguments, call methods, and decode return values. This description can populate UIs in dapps, wallets, and explorers with description of parameters, as well as populate information about methods in IDEs for HLLs.
The JSON structure for such an object is:
```typescript
interface Method {
/** The name of the method */
name: string;
/** Optional, user-friendly description for the method */
desc?: string;
/** The arguments of the method, in order */
args: Array<{
/** The type of the argument */
type: string;
/** Optional, user-friendly name for the argument */
name?: string;
/** Optional, user-friendly description for the argument */
desc?: string;
}>;
/** Information about the method's return value */
returns: {
/** The type of the return value, or "void" to indicate no return value. */
type: string;
/** Optional, user-friendly description for the return value */
desc?: string;
};
}
```
For example:
```json
{
"name": "add",
"desc": "Calculate the sum of two 64-bit integers",
"args": [
{ "type": "uint64", "name": "a", "desc": "The first term to add" },
{ "type": "uint64", "name": "b", "desc": "The second term to add" }
],
"returns": { "type": "uint128", "desc": "The sum of a and b" }
}
```
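As noted above, the signature (and from it, the selector) can be derived from such a description. A minimal sketch follows; the Algorand SDKs provide equivalent helpers, and `method_signature` is an illustrative name:

```python
def method_signature(method: dict) -> str:
    """Build an ARC-4 method signature string from a JSON method description:
    name, parenthesized comma-separated argument types, then the return type."""
    arg_types = ",".join(arg["type"] for arg in method["args"])
    return f'{method["name"]}({arg_types}){method["returns"]["type"]}'

add_desc = {
    "name": "add",
    "args": [{"type": "uint64", "name": "a"}, {"type": "uint64", "name": "b"}],
    "returns": {"type": "uint128"},
}
print(method_signature(add_desc))  # add(uint64,uint64)uint128
```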
### Interfaces
[Section titled “Interfaces”](#interfaces)
An Interface is a logically grouped set of methods. All method selectors in an Interface **MUST** be unique. Method names **MAY** be non-unique, as long as the corresponding method selectors are different. Method names in Interfaces **MUST NOT** begin with an underscore.
An Algorand Application *implements* an Interface if it supports all of the methods from that Interface. An Application **MAY** implement zero, one, or multiple Interfaces.
Interface designers **SHOULD** try to prevent collisions of method selectors between Interfaces that are likely to be implemented together by the same Application.
> For example, an Interface `Calculator` providing addition and subtraction of integer methods and an Interface `NumberFormatting` providing formatting methods for numbers into strings are likely to be used together. Interface designers should ensure that all the methods in `Calculator` and `NumberFormatting` have distinct method selectors.
#### Interface Description
[Section titled “Interface Description”](#interface-description)
An Interface description is a JSON object containing the JSON descriptions for each of the methods in the Interface.
The JSON structure for such an object is:
```typescript
interface Interface {
/** A user-friendly name for the interface */
name: string;
/** Optional, user-friendly description for the interface */
desc?: string;
/** All of the methods that the interface contains */
methods: Method[];
}
```
Interface names **MUST** satisfy the regular expression `[_A-Za-z][A-Za-z0-9_]*`. Interface names starting with `ARC` are reserved for interfaces defined in ARCs. Interfaces defined in `ARC-XXXX` (where `XXXX` is a 0-padded number) **SHOULD** start with `ARC_XXXX`.
For example:
```json
{
"name": "Calculator",
"desc": "Interface for a basic calculator supporting additions and multiplications",
"methods": [
{
"name": "add",
"desc": "Calculate the sum of two 64-bit integers",
"args": [
{ "type": "uint64", "name": "a", "desc": "The first term to add" },
{ "type": "uint64", "name": "b", "desc": "The second term to add" }
],
"returns": { "type": "uint128", "desc": "The sum of a and b" }
},
{
"name": "multiply",
"desc": "Calculate the product of two 64-bit integers",
"args": [
{ "type": "uint64", "name": "a", "desc": "The first factor to multiply" },
{ "type": "uint64", "name": "b", "desc": "The second factor to multiply" }
],
"returns": { "type": "uint128", "desc": "The product of a and b" }
}
]
}
```
### Contracts
[Section titled “Contracts”](#contracts)
A Contract is a declaration of what an Application implements. It includes the complete list of the methods implemented by the related Application. It is similar to an Interface, but it may include further details about the concrete implementation, as well as implementation-specific methods that do not belong to any Interface. All methods in a Contract **MUST** be unique; specifically, each method **MUST** have a unique method selector.
Method names in Contracts **MAY** begin with an underscore, but these names are reserved for use by this ARC and future extensions of this ARC.
#### OnCompletion Actions and Creation
[Section titled “OnCompletion Actions and Creation”](#oncompletion-actions-and-creation)
In addition to the set of methods from the Contract’s definition, a Contract **MAY** allow Application calls with zero arguments, also known as bare Application calls. Since method invocations with zero arguments still encode the method selector as the first Application call argument, bare Application calls are always distinguishable from method invocations.
The primary purpose of bare Application calls is to allow the execution of an OnCompletion (`apan`) action which requires no inputs and has no return value. A Contract **MAY** allow this for all of the OnCompletion actions listed below, for only a subset of them, or for none at all. Great care should be taken when allowing these operations.
Allowed OnCompletion actions:
* 0: NoOp
* 1: OptIn
* 2: CloseOut
* 4: UpdateApplication
* 5: DeleteApplication
Note that OnCompletion action 3, ClearState, is **NOT** allowed to be invoked as a bare Application call.
> While ClearState is a valid OnCompletion action, its behavior differs significantly from the other actions. Namely, an Application running during ClearState which wishes to have any effect on the state of the chain must never fail, since due to the unique behavior about ClearState failure, doing so would revert any effect made by that Application. Because of this, Applications running during ClearState are incentivized to never fail. Accepting any user input, whether that is an ABI method selector, method arguments, or even relying on the absence of Application arguments to indicate a bare Application call, is therefore a dangerous operation, since there is no way to enforce properties or even the existence of data that is supplied by the user.
If a Contract elects to allow bare Application calls for some OnCompletion actions, then that Contract **SHOULD** also allow any of its methods to be called with those OnCompletion actions, as long as this would not cause undesirable or nonsensical behavior.
> The reason for this is because if it’s acceptable to allow an OnCompletion action to take place in isolation inside of a bare Application call, then it’s most likely acceptable to allow the same action to take place at the same time as an ABI method call. And since the latter can be accomplished in just one transaction, it can be more efficient.
If a Contract requires an OnCompletion action to take inputs or to return a value, then the **RECOMMENDED** behavior of the Contract is to not allow bare Application calls for that OnCompletion action. Rather, the Contract should have one or more methods that are meant to be called with the appropriate OnCompletion action set in order to process that action.
A Contract **MUST NOT** allow any of its methods to be called with the ClearState OnCompletion action.
> To reinforce an earlier point, it is unsafe for a ClearState program to read any user input, whether that is a method argument or even relying on a certain method selector to be present. This behavior makes it unsafe to use ABI calling conventions during ClearState.
If an Application is called with greater than zero Application call arguments (i.e. **NOT** a bare Application call) and the OnCompletion action is **NOT** ClearState, the Application **MUST** always treat the first argument as a method selector and invoke the specified method. This behavior **MUST** be followed for all OnCompletion actions, except for ClearState. This applies to Application creation transactions as well, where the supplied Application ID is 0.
Similar to OnCompletion actions, if a Contract requires its creation transaction to take inputs or to return a value, then the **RECOMMENDED** behavior of the Contract is to not allow bare Application calls for creation. Rather, the Contract should have one or more methods that are meant to be called in order to create the Contract.
#### Contract Description
[Section titled “Contract Description”](#contract-description)
A Contract description is a JSON object containing the JSON descriptions for each of the methods in the Contract.
The JSON structure for such an object is:
```typescript
interface Contract {
/** A user-friendly name for the contract */
name: string;
/** Optional, user-friendly description for the contract */
desc?: string;
/**
* Optional object listing the contract instances across different networks
*/
networks?: {
/**
* The key is the base64 genesis hash of the network, and the value contains
* information about the deployed contract in the network indicated by the
* key
*/
[network: string]: {
/** The app ID of the deployed contract in this network */
appID: number;
}
}
/** All of the methods that the contract implements */
methods: Method[];
}
```
Contract names **MUST** satisfy the regular expression `[_A-Za-z][A-Za-z0-9_]*`.
The `desc` fields of the Contract and the methods inside the Contract **SHOULD** contain information that is not explicitly encoded in the other fields, such as support of bare Application calls, requirement of specific OnCompletion action for specific methods, and methods to call for creation (if creation cannot be done via a bare Application call).
For example:
```json
{
"name": "Calculator",
"desc": "Contract of a basic calculator supporting additions and multiplications. Implements the Calculator interface.",
"networks": {
"wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=": { "appID": 1234 },
"SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=": { "appID": 5678 }
},
"methods": [
{
"name": "add",
"desc": "Calculate the sum of two 64-bit integers",
"args": [
{ "type": "uint64", "name": "a", "desc": "The first term to add" },
{ "type": "uint64", "name": "b", "desc": "The second term to add" }
],
"returns": { "type": "uint128", "desc": "The sum of a and b" }
},
{
"name": "multiply",
"desc": "Calculate the product of two 64-bit integers",
"args": [
{ "type": "uint64", "name": "a", "desc": "The first factor to multiply" },
{ "type": "uint64", "name": "b", "desc": "The second factor to multiply" }
],
"returns": { "type": "uint128", "desc": "The product of a and b" }
}
]
}
```
### Method Invocation
[Section titled “Method Invocation”](#method-invocation)
In order for a caller to invoke a method, the caller and the method implementation (callee) must agree on how information will be passed to and from the method. This ABI defines a standard for where this information should be stored and for its format.
This standard does not apply to Application calls with the ClearState OnCompletion action, since it is unsafe for ClearState programs to rely on user input.
#### Standard Format
[Section titled “Standard Format”](#standard-format)
The method selector must be the first Application call argument (index 0), accessible as `txna ApplicationArgs 0` from TEAL (except for bare Application calls, which use zero application call arguments).
If a method has 15 or fewer arguments, each argument **MUST** be placed in order in the following Application call argument slots (indexes 1 through 15). The arguments **MUST** be encoded as defined in the [Encoding](#encoding) section.
Otherwise, if a method has 16 or more arguments, the first 14 **MUST** be placed in order in the following Application call argument slots (indexes 1 through 14), and the remaining arguments **MUST** be encoded as a tuple in the final Application call argument slot (index 15). The arguments must be encoded as defined in the [Encoding](#encoding) section.
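The slot-assignment rule described in the two paragraphs above can be sketched as follows. Encoding of the individual values is out of scope here (see the Encoding section); `pack_app_args` is an illustrative helper name:

```python
def pack_app_args(selector: bytes, args: list) -> list:
    """Lay out a method selector and already-encoded ABI arguments into
    Application call argument slots. With 15 or fewer arguments they occupy
    slots 1-15; with 16 or more, slots 1-14 hold the first 14 arguments and
    slot 15 holds the remainder packed as a tuple."""
    if len(args) <= 15:
        return [selector, *args]
    return [selector, *args[:14], tuple(args[14:])]
```

Note that in both branches the result uses at most 16 slots (indexes 0 through 15), matching the protocol's limit on Application call arguments.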
If a method has a non-void return type, then the return value of the method **MUST** be located in the final logged value of the method’s execution, using the `log` opcode. The logged value **MUST** contain a specific 4 byte prefix, followed by the encoding of the return value as defined in the [Encoding](#encoding) section. The 4 byte prefix is defined as the first 4 bytes of the SHA-512/256 hash of the ASCII string `return`. In hex, this is `151f7c75`.
> For example, if the method `add(uint64,uint64)uint128` wanted to return the value 4160, it would log the byte array `151f7c7500000000000000000000000000001040` (shown in hex).
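The logged return value from the example above can be reproduced with a short sketch; `encode_uint128_return` is an illustrative helper name, and the 4-byte prefix is derived exactly as the specification describes:

```python
import hashlib

# First 4 bytes of the SHA-512/256 hash of the ASCII string "return" (151f7c75).
RETURN_PREFIX = hashlib.new("sha512_256", b"return").digest()[:4]

def encode_uint128_return(value: int) -> bytes:
    """Build the log payload for a uint128 return value:
    the 4-byte return prefix followed by the 16-byte big-endian encoding."""
    return RETURN_PREFIX + value.to_bytes(16, "big")

print(encode_uint128_return(4160).hex())  # 151f7c7500000000000000000000000000001040
```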
#### Implementing a Method
[Section titled “Implementing a Method”](#implementing-a-method)
An ARC-4 Application implementing a method:
1. **MUST** check if `txn NumAppArgs` equals 0. If true, then this is a bare Application call. If the Contract supports bare Application calls for the current transaction parameters (it **SHOULD** check the OnCompletion action and whether the transaction is creating the application), it **MUST** handle the call appropriately and either approve or reject the transaction. The following steps **MUST** be ignored in this case. Otherwise, if the Contract does not support this bare application call, the Contract **MUST** reject the transaction.
2. **MUST** examine `txna ApplicationArgs 0` to identify the selector of the method being invoked. If the contract does not implement a method with that selector, the Contract **MUST** reject the transaction.
3. **MUST** execute the actions required to implement the method being invoked. In general, this works by branching to the body of the method indicated by the selector.
4. The code for that method **MAY** extract the arguments it needs, if any, from the application call arguments as described in the [Encoding](#encoding) section. If the method has more than 15 arguments and the contract needs to extract an argument beyond the 14th, it **MUST** decode `txna ApplicationArgs 15` as a tuple to access the arguments contained in it.
5. If the method is non-void, the Application **MUST** encode the return value as described in the [Encoding](#encoding) section and then `log` it with the prefix `151f7c75`. Other values **MAY** be logged before the return value, but other values **MUST NOT** be logged after the return value.
#### Calling a Method from Off-Chain
[Section titled “Calling a Method from Off-Chain”](#calling-a-method-from-off-chain)
To invoke an ARC-4 Application, an off-chain system, such as a dapp or wallet, would first obtain the Interface or Contract description JSON object for the app. The client may now:
1. Create an Application call transaction with the following parameters:
1. Use the ID of the desired Application whose program code implements the method being invoked, or 0 if they wish to create the Application.
2. Use the selector of the method being invoked as the first Application call argument.
3. Encode all arguments for the method, if any, as described in the [Encoding](#encoding) section. If the method has more than 15 arguments, encode all arguments beyond (but not including) the 14th as a tuple into the final Application call argument.
2. Submit this transaction and wait until it successfully commits to the blockchain.
3. Decode the return value, if any, from the ApplyData’s log information.
Clients **MAY** ignore the return value.
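The argument-packing rule in step 1 can be sketched as follows. This is a non-normative helper (`buildAppArgs` is our own name): it assumes the selector and the per-argument encodings are already computed, and, for simplicity, that all arguments are *static* types, so the trailing tuple encoding is just the concatenation of the element encodings (dynamic types would need the head/tail layout of the Encoding section):

```ts
// Lay out ApplicationArgs for an ARC-4 method call: selector first, then
// arguments; arguments beyond the 14th are packed as a tuple in the final slot.
function buildAppArgs(selector: Uint8Array, encodedArgs: Uint8Array[]): Uint8Array[] {
  const MAX_ARGS = 16; // selector + up to 15 argument slots
  if (encodedArgs.length <= MAX_ARGS - 1) {
    return [selector, ...encodedArgs];
  }
  const head = encodedArgs.slice(0, 14);   // args 1..14 keep their own slots
  const packed = encodedArgs.slice(14);    // args 15+ share the last slot
  const tuple = new Uint8Array(packed.reduce((n, a) => n + a.length, 0));
  let off = 0;
  for (const a of packed) { tuple.set(a, off); off += a.length; }
  return [selector, ...head, tuple];
}
```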
An exception to the above instructions is if the app supports bare Application calls for some transaction parameters, and the client wishes to invoke this functionality. Then the client may simply create and submit to the network an Application call transaction with the ID of the Application (or 0 if they wish to create the application) and the desired OnCompletion value set. Application arguments **MUST NOT** be present.
### Encoding
[Section titled “Encoding”](#encoding)
This section describes how ABI types can be represented as byte strings.
Like the [EthereumABI](https://docs.soliditylang.org/en/v0.8.6/abi-spec.html), this encoding specification is designed to have the following two properties:
1. The number of non-sequential “reads” necessary to access a value is at most the depth of that value inside the encoded array structure. For example, at most 4 reads are needed to retrieve a value at `a[i][k][l][r]`.
2. The encoding of a value or array element is not interleaved with other data and it is relocatable, i.e. only relative “addresses” (indexes to other parts of the encoding) are used.
#### Types
[Section titled “Types”](#types)
The following types are supported in the Algorand ABI.
* `uint<N>`: An `N`-bit unsigned integer, where `8 <= N <= 512` and `N % 8 = 0`. When this type is used as part of a method signature, `N` must be written as a base 10 number without any leading zeros.
* `byte`: An alias for `uint8`.
* `bool`: A boolean value that is restricted to either 0 or 1. When encoded, up to 8 consecutive `bool` values will be packed into a single byte.
* `ufixed<N>x<M>`: An `N`-bit unsigned fixed-point decimal number with precision `M`, where `8 <= N <= 512`, `N % 8 = 0`, and `0 < M <= 160`, which denotes a value `v` as `v / (10^M)`. When this type is used as part of a method signature, `N` and `M` must be written as base 10 numbers without any leading zeros.
* `<type>[<N>]`: A fixed-length array of length `N`, where `N >= 0`. `type` can be any other type. When this type is used as part of a method signature, `N` must be written as a base 10 number without any leading zeros, *unless* `N` is zero, in which case only a single 0 character should be used.
* `address`: Used to represent a 32-byte Algorand address. This is equivalent to `byte[32]`.
* `<type>[]`: A variable-length array. `type` can be any other type.
* `string`: A variable-length byte array (`byte[]`) assumed to contain UTF-8 encoded content.
* `(T1,T2,…,TN)`: A tuple of the types `T1`, `T2`, …, `TN`, `N >= 0`.
* reference types `account`, `asset`, `application`: **MUST NOT** be used as the return type. For encoding purposes they are an alias for `uint8`. See section “Reference Types” below.
Additional special use types are defined in [Reference Types](#reference-types) and [Transaction Types](#transaction-types).
#### Static vs Dynamic Types
[Section titled “Static vs Dynamic Types”](#static-vs-dynamic-types)
For encoding purposes, the types are divided into two categories: static and dynamic.
The dynamic types are:
* `<type>[]` for any `type`
* This includes `string` since it is an alias for `byte[]`.
* `<type>[<N>]` for any dynamic `type`
* `(T1,T2,...,TN)` if `Ti` is dynamic for some `1 <= i <= N`
All other types are static. For a static type, all encoded values of that type have the same length, irrespective of their actual value.
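The static/dynamic classification can be expressed directly as a recursive check. This is a non-normative sketch over the type grammar above (`isDynamic` and `splitTuple` are our own helper names, handling only the forms shown in this section):

```ts
// A type is dynamic if it is string, a variable-length array, a fixed-length
// array of a dynamic type, or a tuple with at least one dynamic component.
function isDynamic(type: string): boolean {
  if (type === "string") return true;            // alias for byte[]
  if (type.endsWith("[]")) return true;          // variable-length array
  const fixed = type.match(/^(.*)\[\d+\]$/);
  if (fixed) return isDynamic(fixed[1]);         // T[N] is dynamic iff T is
  if (type.startsWith("(") && type.endsWith(")")) {
    return splitTuple(type.slice(1, -1)).some(isDynamic);
  }
  return false;                                  // uintN, ufixedNxM, bool, byte, address, ...
}

// Split a tuple body on top-level commas only (nested tuples keep theirs).
function splitTuple(body: string): string[] {
  const parts: string[] = [];
  let depth = 0, cur = "";
  for (const ch of body) {
    if (ch === "(") depth++;
    if (ch === ")") depth--;
    if (ch === "," && depth === 0) { parts.push(cur); cur = ""; }
    else cur += ch;
  }
  if (cur !== "") parts.push(cur);
  return parts;
}
```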
#### Encoding Rules
[Section titled “Encoding Rules”](#encoding-rules)
Let `len(a)` be the number of bytes in the binary string `a`. The returned value shall be considered to have the ABI type `uint16`.
Let `enc` be a mapping from values of the ABI types to binary strings. This mapping defines the encoding of the ABI.
For any ABI value `x`, we recursively define `enc(x)` to be as follows:
* If `x` is a tuple of `N` types, `(T1,T2,...,TN)`, where `x[i]` is the value at index `i`, starting at 1:
* `enc(x) = head(x[1]) ... head(x[N]) tail(x[1]) ... tail(x[N])`
* Let `head` and `tail` be mappings from values in this tuple to binary strings. For each `i` such that `1 <= i <= N`, these mappings are defined as:
* If `Ti` (the type of `x[i]`) is static:
* If `Ti` is `bool`:
* Let `after` be the largest integer such that all `T(i+j)` are `bool`, for `0 <= j <= after`.
* Let `before` be the largest integer such that all `T(i-j)` are `bool`, for `0 <= j <= before`.
* If `before % 8 == 0`:
* `head(x[i]) = enc(x[i]) | (enc(x[i+1]) >> 1) | ... | (enc(x[i + min(after,7)]) >> min(after,7))`, where `>>` is bitwise right shift which pads with 0, `|` is bitwise or, and `min(x,y)` returns the minimum value of the integers `x` and `y`.
* `tail(x[i]) = ""` (the empty string)
* Otherwise:
* `head(x[i]) = ""` (the empty string)
* `tail(x[i]) = ""` (the empty string)
* Otherwise:
* `head(x[i]) = enc(x[i])`
* `tail(x[i]) = ""` (the empty string)
* Otherwise:
* `head(x[i]) = enc(len( head(x[1]) ... head(x[N]) tail(x[1]) ... tail(x[i-1]) ))`
* `tail(x[i]) = enc(x[i])`
* If `x` is a fixed-length array `T[N]`:
* `enc(x) = enc((x[0], ..., x[N-1]))`, i.e. it’s encoded as if it were an `N` element tuple where every element is type `T`.
* If `x` is a variable-length array `T[]` with `k` elements:
* `enc(x) = enc(k) enc([x[0], ..., x[k-1]])`, i.e. it’s encoded as if it were a fixed-length array of `k` elements, prefixed with its length, `k` encoded as a `uint16`.
* If `x` is an `N`-bit unsigned integer, `uint<N>`:
* `enc(x)` is the `N`-bit big-endian encoding of `x`.
* If `x` is an `N`-bit unsigned fixed-point decimal number with precision `M`, `ufixed<N>x<M>`:
* `enc(x) = enc(x * 10^M)`, where `x * 10^M` is interpreted as a `uint`.
* If `x` is a boolean value `bool`:
* `enc(x)` is a single byte whose **most significant bit** is either 1 or 0, if `x` is true or false respectively. All other bits are 0. Note: this means that a value of true will be encoded as `0x80` (`10000000` in binary) and a value of false will be encoded as `0x00`. This is in contrast to most other encoding schemes, where a value of true is encoded as `0x01`.
Other aliased types’ encodings are already covered:
* `string` and `address` are aliases for `byte[]` and `byte[32]` respectively
* `byte` is an alias for `uint8`
* each of the reference types is an alias for `uint8`
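To make the head/tail rules concrete, here is a non-normative walk-through of encoding the tuple `(uint64,string)` with value `(42, "hi")`; the helper names are ours, not part of the spec. `uint64` is static, so its head is its own encoding; `string` is dynamic, so its head is a `uint16` offset to its tail:

```ts
function encUint16(n: number): Uint8Array {
  return new Uint8Array([n >> 8, n & 0xff]);
}

function encUint64(n: bigint): Uint8Array {
  const out = new Uint8Array(8);
  for (let i = 7; i >= 0; i--) { out[i] = Number(n & 0xffn); n >>= 8n; }
  return out;
}

// string is byte[]: a uint16 length prefix followed by the bytes.
function encString(s: string): Uint8Array {
  const bytes = new TextEncoder().encode(s);
  const out = new Uint8Array(2 + bytes.length);
  out.set(encUint16(bytes.length), 0);
  out.set(bytes, 2);
  return out;
}

// head(x[1]) = enc(42)               (static, 8 bytes)
// head(x[2]) = offset of tail(x[2])  (uint16: 8 + 2 = 10)
// tail(x[2]) = enc("hi")
const head1 = encUint64(42n);
const head2 = encUint16(head1.length + 2);
const tail2 = encString("hi");
const encoded = new Uint8Array([...head1, ...head2, ...tail2]);
const hex = [...encoded].map(b => b.toString(16).padStart(2, "0")).join("");
console.log(hex); // 000000000000002a000a00026869
```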
#### Reference Types
[Section titled “Reference Types”](#reference-types)
Three special types are supported *only* as the type of an argument. They *cannot* be embedded in arrays and tuples.
* `account` represents an Algorand account, stored in the Accounts (`apat`) array
* `asset` represents an Algorand Standard Asset (ASA), stored in the Foreign Assets (`apas`) array
* `application` represents an Algorand Application, stored in the Foreign Apps (`apfa`) array
Some AVM opcodes require specific values to be placed in the “foreign arrays” of the Application call transaction. These three types allow methods to describe these requirements. To encode method calls that have these types as arguments, the value in question is placed in the Accounts (`apat`), Foreign Assets (`apas`), or Foreign Apps (`apfa`) arrays, respectively, and a `uint8` containing the index of the value in the appropriate array is encoded in the normal location for this argument.
Note that the Accounts and Foreign Apps arrays have an implicit value at index 0, the Sender of the transaction or the called Application, respectively. Therefore, indexes of any additional values begin at 1. Additionally, for efficiency, callers of a method that wish to pass the transaction Sender as an `account` value or the called Application as an `application` value **SHOULD** use 0 as the index of these values and not explicitly add them to Accounts or Foreign Apps arrays.
When passing addresses, ASAs, or apps that are *not* required to be accessed by such opcodes, ARC-4 Contracts **SHOULD** use the base types for passing these types: `address` for accounts and `uint64` for asset or Application IDs.
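The index rules above can be sketched for the `account` case. This is a non-normative helper (`accountRefIndex` is our own name) that resolves an account value to the `uint8` encoded in the argument slot, remembering that index 0 is implicitly the transaction Sender and explicit `apat` entries therefore start at index 1:

```ts
function accountRefIndex(sender: string, accounts: string[], value: string): number {
  if (value === sender) return 0;            // SHOULD use the implicit slot
  const pos = accounts.indexOf(value);
  if (pos >= 0) return pos + 1;              // already present in apat
  accounts.push(value);                      // append and reference it
  return accounts.length;                    // 1-based because of the implicit slot
}
```

The same scheme applies to `asset` and `application`, except that only the Foreign Apps array has an implicit index-0 entry (the called Application).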
#### Transaction Types
[Section titled “Transaction Types”](#transaction-types)
Some apps require that they are invoked as part of a larger transaction group, containing specific additional transactions. Seven additional special types are supported (only) as argument types to describe such requirements.
* `txn` represents any Algorand transaction
* `pay` represents a PaymentTransaction (algo transfer)
* `keyreg` represents a KeyRegistration transaction (configure consensus participation)
* `acfg` represents an AssetConfig transaction (create, configure, or destroy ASAs)
* `axfer` represents an AssetTransfer transaction (ASA transfer)
* `afrz` represents an AssetFreezeTx transaction (freeze or unfreeze ASAs)
* `appl` represents an ApplicationCallTx transaction (create/invoke an Application)
Arguments of these types are encoded as consecutive transactions in the same transaction group as the Application call, placed in the position immediately preceding the Application call. Unlike “foreign” references, these special types are not encoded in ApplicationArgs as small integers “pointing” to the associated object. In fact, they occupy no space at all in the Application Call transaction itself. Allowing explicit references would create opportunities for multiple transaction “values” to point to the same transaction in the group, which is undesirable. Instead, the locations of the transactions are implied entirely by the placement of the transaction types in the argument list.
For example, to invoke the method `deposit(string,axfer,pay,uint32)void`, a client would create a transaction group containing, in this order:
1. an asset transfer
2. a payment
3. the actual Application call
When encoding the other (non-transaction) arguments, the client **MUST** act as if the transaction arguments were completely absent from the method signature. The Application call would contain the method selector in ApplicationArgs\[0], the first (string) argument in ApplicationArgs\[1], and the fourth (uint32) argument in ApplicationArgs\[2].
ARC-4 Applications **SHOULD** be constructed to allow their invocations to be combined with other contract invocations in a single atomic group if they can do so safely. For example, they **SHOULD** use `gtxns` to examine the previous index in the group for a required `pay` transaction, rather than hardcode an index with `gtxn`.
In general, an ARC-4 Application method with `n` transactions as arguments **SHOULD** only inspect the `n` previous transactions. In particular, it **SHOULD NOT** inspect transactions after itself, and it **SHOULD NOT** check the size of the transaction group (if this can be done safely). In addition, a given method **SHOULD** always expect the same number of transactions before itself. For example, the method `deposit(string,axfer,pay,uint32)void` is always preceded by two transactions; it can never be called with only an asset transfer and no payment.
> The reason for the above recommendation is to provide minimal composability support while preventing obvious dangerous attacks. For example, if some apps expect payment transactions after them while others expect payment transactions before them, then the same payment may be counted twice.
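The group layout and argument-slot rules for transaction arguments can be sketched as follows. This is a non-normative helper (`layout` is our own name) applied to the `deposit(string,axfer,pay,uint32)void` example above:

```ts
const TXN_TYPES = new Set(["txn", "pay", "keyreg", "acfg", "axfer", "afrz", "appl"]);

// Transaction-typed arguments become transactions placed immediately before
// the app call, in signature order; the remaining value arguments fill
// ApplicationArgs slots 1..n as if the transaction arguments were absent
// (slot 0 holds the method selector).
function layout(argTypes: string[]) {
  const groupBefore = argTypes.filter(t => TXN_TYPES.has(t));
  const valueArgs = argTypes.filter(t => !TXN_TYPES.has(t));
  const slots = valueArgs.map((t, i) => ({ type: t, appArgIndex: i + 1 }));
  return { groupBefore, slots };
}
```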
## Rationale
[Section titled “Rationale”](#rationale)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Wallet Transaction Signing API (Functional)
> An API for a function used to sign a list of transactions.
> This ARC is intended to be completely compatible with [ARC-1](/arc-standards/arc-0001).
## Abstract
[Section titled “Abstract”](#abstract)
ARC-1 defines a standard for signing transactions with security in mind. This proposal is a strict subset of ARC-1 that outlines only the minimum functionality required in order to be useable.
Wallets that conform to ARC-1 already conform to this API.
Wallets conforming to [ARC-5](/arc-standards/arc-0005) but not ARC-1 **MUST** only be used for testing purposes and **MUST NOT** be used on MainNet. This is because ARC-5 does not provide the same security guarantees as ARC-1 to properly protect wallet users.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
### Interface `SignTxnsFunction`
[Section titled “Interface SignTxnsFunction”](#interface-signtxnsfunction)
Signatures are requested by calling a function `signTxns(txns)` on a list `txns` of transactions. The dApp may also provide an optional parameter `opts`.
A wallet transaction signing function `signTxns` is defined by the following interface:
```ts
export type SignTxnsFunction = (
  txns: WalletTransaction[],
  opts?: SignTxnsOpts,
) => Promise<(SignedTxnStr | null)[]>;
```
* `SignTxnsOpts` is as specified by [ARC-1](/arc-standards/arc-0001#interface-signtxnsopts).
* `SignedTxnStr` is as specified by [ARC-1](/arc-standards/arc-0001#interface-signedtxnstr).
A `SignTxnsFunction`:
* expects `txns` to be in the correct format as specified by `WalletTransaction`.
### Interface `WalletTransaction`
[Section titled “Interface WalletTransaction”](#interface-wallettransaction)
```ts
export interface WalletTransaction {
  /**
   * Base64 encoding of the canonical msgpack encoding of a Transaction.
   */
  txn: string;
}
```
### Semantic requirements
[Section titled “Semantic requirements”](#semantic-requirements)
* The call `signTxns(txns, opts)` **MUST** either throw an error or return an array `ret` of the same length as the `txns` array.
* Each element `ret[i]` **MUST** be a valid `SignedTxnStr` whose underlying transaction exactly matches `txns[i].txn`.
This ARC uses interchangeably the terms “throw an error” and “reject a promise with an error”.
`signTxns` **SHOULD** follow the error standard specified in [ARC-0001](/arc-standards/arc-0001#error-standards).
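A dApp can enforce the same-length requirement on the caller side. This is a non-normative sketch (`signChecked` is our own name; the types are re-declared locally for self-containment, and `opts` is omitted for brevity):

```ts
type SignedTxnStr = string;
interface WalletTransaction { txn: string; }
type SignTxnsFunction = (txns: WalletTransaction[]) => Promise<(SignedTxnStr | null)[]>;

// Wrap a wallet's signTxns and reject results that violate the semantic
// requirement that ret has the same length as txns.
async function signChecked(signTxns: SignTxnsFunction, txns: WalletTransaction[]) {
  const ret = await signTxns(txns);
  if (ret.length !== txns.length) {
    throw new Error(`wallet returned ${ret.length} results for ${txns.length} transactions`);
  }
  return ret;
}
```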
### UI requirements
[Section titled “UI requirements”](#ui-requirements)
Wallets satisfying this ARC but not [ARC-0001](/arc-standards/arc-0001) **MUST** clearly display a warning to the user that the wallet **MUST NOT** be used with real funds on MainNet.
## Rationale
[Section titled “Rationale”](#rationale)
This simplified version of ARC-0001 exists for two main reasons:
1. To outline the minimum amount of functionality needed in order to be useful.
2. To serve as a stepping stone towards full ARC-0001 compatibility.
While wallets conforming only to this ARC **MUST NOT** be used with real funds on MainNet for security reasons, this simplified API sets a lower bar and acts as a signpost for which wallets can be used at all.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Wallet Address Discovery API
> API function, enable, which allows the discovery of accounts
## Abstract
[Section titled “Abstract”](#abstract)
A function, `enable`, which allows the discovery of accounts. Optional functions, `enableNetwork` and `enableAccounts`, which handle the multiple capabilities of `enable` separately. This document requires nothing else, but further semantic meaning is prescribed to these functions in [ARC-0010](/arc-standards/arc-0010#semantic-requirements) which builds off of this one and a few others. The caller of this function is usually a dApp.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
### Interface `EnableFunction`
[Section titled “Interface EnableFunction”](#interface-enablefunction)
```ts
export type AlgorandAddress = string;
export type GenesisHash = string;
export type EnableNetworkFunction = (
  opts?: EnableNetworkOpts
) => Promise<EnableNetworkResult>;
export type EnableAccountsFunction = (
  opts?: EnableAccountsOpts
) => Promise<EnableAccountsResult>;
export type EnableFunction = (
  opts?: EnableOpts
) => Promise<EnableResult>;
export type EnableOpts = (
  EnableNetworkOpts & EnableAccountsOpts
);
export interface EnableNetworkOpts {
  genesisID?: string;
  genesisHash?: GenesisHash;
}
export interface EnableAccountsOpts {
  accounts?: AlgorandAddress[];
}
export type EnableResult = (
  EnableNetworkResult & EnableAccountsResult
);
export interface EnableNetworkResult {
  genesisID: string;
  genesisHash: GenesisHash;
}
export interface EnableAccountsResult {
  accounts: AlgorandAddress[];
}
export interface EnableError extends Error {
  code: number;
  data?: any;
}
```
An `EnableFunction` with optional input argument `opts:EnableOpts` **MUST** return a value `ret:EnableResult` or **MUST** throw an exception object of type `EnableError`.
#### String specification: `GenesisID` and `GenesisHash`
[Section titled “String specification: GenesisID and GenesisHash”](#string-specification-genesisid-and-genesishash)
A `GenesisID` is an ASCII string.
A `GenesisHash` is a base64 string representing a 32-byte genesis hash.
#### String specification: `AlgorandAddress`
[Section titled “String specification: AlgorandAddress”](#string-specification-algorandaddress)
Defined as in [ARC-0001](/arc-standards/arc-0001#interface-algorandaddress):
> An Algorand address is represented by a 58-character base32 string. It includes the checksum.
#### Error Standards
[Section titled “Error Standards”](#error-standards)
`EnableError` follows the same rules as `SignTxnsError` from [ARC-0001](/arc-standards/arc-0001#error-interface-signtxnserror) and uses the same status error codes.
### Interface `WalletAccountManager`
[Section titled “Interface WalletAccountManager”](#interface-walletaccountmanager)
```ts
export interface WalletAccountManager {
  switchAccount: (addr: AlgorandAddress) => Promise<void>
  switchNetwork: (genesisID: string) => Promise<void>
  onAccountSwitch: (hook: (addr: AlgorandAddress) => void) => void
  onNetworkSwitch: (hook: (genesisID: string, genesisHash: GenesisHash) => void) => void
}
```
Wallets **SHOULD** expose a `switchAccount` function to allow an app to switch to another account managed by the wallet. The `switchAccount` function should return a promise which is fulfilled when the wallet has effectively switched accounts. The function must throw an `Error` exception when the wallet can’t execute the switch (for example, when the provided address is not managed by the wallet or is not a valid Algorand address).
Similarly, wallets **SHOULD** expose a `switchNetwork` function to instruct a wallet to switch to another network. The function must throw an `Error` exception when the wallet can’t execute the switch (for example, when the provided genesis ID is not recognized by the wallet).
Very often, webapps have their own state with information about the user (identified by the account address) and a network. For example, a webapp can list all compatible Smart Contracts for a given network. For decent integration with a wallet, a webapp must be able to react to account and network switches made from the wallet interface. For that we define two functions which **MUST** be exposed by wallets: `onAccountSwitch` and `onNetworkSwitch`. These functions register a hook and call it whenever a user switches an account or a network, respectively, from the wallet interface.
### Semantic requirements
[Section titled “Semantic requirements”](#semantic-requirements)
This ARC uses interchangeably the terms “throw an error” and “reject a promise with an error”.
#### First call to `enable`
[Section titled “First call to enable”](#first-call-to-enable)
Regarding a first call by a caller to `enable(opts)` or `enable()` (where `opts` is `undefined`), with potential promised return value `ret`:
When `genesisID` and/or `genesisHash` is specified in `opts`:
* The call `enable(opts)` **MUST** either throw an error or return an object `ret` where `ret.genesisID` and `ret.genesisHash` match `opts.genesisID` and `opts.genesisHash` (i.e., `ret.genesisID` is identical to `opts.genesisID` if `opts.genesisID` is specified, and `ret.genesisHash` is identical to `opts.genesisHash` if `opts.genesisHash` is specified).
* The user **SHOULD** be prompted for permission to acknowledge control of accounts on that specific network (defined by `ret.genesisID` and `ret.genesisHash`).
* In the case only `opts.genesisID` is provided, several networks may match this ID and the user **SHOULD** be prompted to select the network they wish to use.
When neither `genesisID` nor `genesisHash` is specified in `opts`:
* The user **SHOULD** be prompted to select the network they wish to use.
* The call `enable(opts)` **MUST** either throw an error or return an object `ret` where `ret.genesisID` and `ret.genesisHash` **SHOULD** represent the user’s selection of network.
* The function **MAY** throw an error if it does not support user selection of network.
When `accounts` is specified in `opts`:
* The call `enable(opts)` **MUST** either throw an error or return an object `ret` where `ret.accounts` is an array that starts with all the same elements as `opts.accounts`, in the same order.
* The user **SHOULD** be prompted for permission to acknowledge their control of the specified accounts. The wallet **MAY** allow the user to provide more accounts than those listed. The wallet **MAY** allow the user to select fewer accounts than those listed, in which case the wallet **MUST** return an error which **SHOULD** be a user-rejected error and contain the rejected accounts in `data.accounts`.
When `accounts` is not specified in `opts`:
* The user **SHOULD** be prompted to select the accounts they wish to reveal on the selected network.
* The call `enable(opts)` **MUST** either throw an error or return an object `ret` where `ret.accounts` is an empty or non-empty array.
* If `ret.accounts` is not empty, the caller **MAY** assume that `ret.accounts[0]` is the user’s “currently-selected” or “default” account, for DApps that only require access to one account.
> An empty `ret.accounts` array allows a DApp to get access to an Algorand node but not to signing capabilities.
#### Network
[Section titled “Network”](#network)
In addition to the above rules, in all cases, if `ret.genesisID` is one of the official networks `mainnet-v1.0`, `testnet-v1.0`, or `betanet-v1.0`, then `ret.genesisHash` **MUST** match the genesis hash of that network:
| Genesis ID | Genesis Hash |
| -------------- | ---------------------------------------------- |
| `mainnet-v1.0` | `wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=` |
| `testnet-v1.0` | `SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=` |
| `betanet-v1.0` | `mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=` |
When using a genesis ID that is not one of the above, the caller **SHOULD** always provide a `genesisHash`. This is because a `genesisID` does not uniquely define a network in that case. If a caller does not provide a `genesisHash`, multiple calls to `enable` may return a different network with the same `genesisID` but a different `genesisHash`.
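A caller can check the returned network against the table above. This is a non-normative sketch (`checkNetwork` is our own helper name; the hashes are taken verbatim from the table):

```ts
const OFFICIAL_NETWORKS: Record<string, string> = {
  "mainnet-v1.0": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
  "testnet-v1.0": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
  "betanet-v1.0": "mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=",
};

// Returns false only when an official genesis ID is paired with the wrong
// hash. For non-official IDs there is nothing to check against; the caller
// SHOULD have pinned a genesisHash itself.
function checkNetwork(genesisID: string, genesisHash: string): boolean {
  const expected = OFFICIAL_NETWORKS[genesisID];
  return expected === undefined || expected === genesisHash;
}
```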
#### Identification of the caller
[Section titled “Identification of the caller”](#identification-of-the-caller)
The `enable` function **MAY** remember the choices previously made by the user for a specific caller and apply them every time the same caller calls the function. The function **MUST** ensure that the caller can be securely identified. In particular, by default, the function **MUST NOT** allow webapps served over the http protocol to call it, as such webapps can easily be modified by a man-in-the-middle attacker. In the case of callers that are https websites, the caller **SHOULD** be identified by its fully qualified domain name.
The function **MAY** offer the user some “developer mode” or “advanced” options to allow calls from insecure dApps. In that case, the fact that the caller is insecure and/or that the wallet is in “developer mode” **MUST** be clearly displayed by the wallet.
#### Multiple calls to `enable`
[Section titled “Multiple calls to enable”](#multiple-calls-to-enable)
The same caller **MAY** call the `enable` function multiple times. When the caller is a dApp, it **SHOULD** call `enable()` every time the dApp is refreshed.
The `enable` function **MAY** return a different value each time it is called, even when called with the exact same argument `opts`. The caller **MUST NOT** assume that the `enable` function will always return the same value, and **MUST** properly handle changes of available accounts and/or changes of network.
For example, a user may want to change network or accounts for a dApp. That is why, upon refresh, the dApp **SHOULD** automatically switch network and perform all required changes. Examples of required changes include, but are not limited to, changes to the list of accounts, to account statuses (e.g., opted in or not), and to account balances.
### `enableNetwork` and `enableAccounts`
[Section titled “enableNetwork and enableAccounts”](#enablenetwork-and-enableaccounts)
It may be desirable for a dapp to perform network queries prior to requesting that the user enable an account for use with the dapp. Wallets may provide the functionality of `enable` in two parts: `enableNetwork` for network discovery, and `enableAccounts` for account discovery, which together are the equivalent of calling `enable`.
## Rationale
[Section titled “Rationale”](#rationale)
This API puts power in the user’s hands to choose a preferred network and account to use when interacting with a dApp.
It also allows dApp developers to suggest a specific network, or specific accounts, as appropriate. The user still maintains the ability to reject the dApp’s suggestions, which corresponds to rejecting the promise returned by `enable()`.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CCO](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Wallet Post Transactions API
> API function to Post Signed Transactions to the network.
## Abstract
[Section titled “Abstract”](#abstract)
A function, `postTxns`, which accepts an array of `SignedTransaction`s, and posts them to the network.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
This ARC uses interchangeably the terms “throw an error” and “reject a promise with an error”.
### Interface `PostTxnsFunction`
[Section titled “Interface PostTxnsFunction”](#interface-posttxnsfunction)
```ts
export type TxnID = string;
export type SignedTxnStr = string;
export type PostTxnsFunction = (
  stxns: SignedTxnStr[],
) => Promise<PostTxnsResult>;
export interface PostTxnsResult {
  txnIDs: TxnID[];
}
export interface PostTxnsError extends Error {
  code: number;
  data?: any;
  successTxnIDs: (TxnID | null)[];
}
```
A `PostTxnsFunction` with input argument `stxns:string[]` and promised return value `ret:PostTxnsResult`:
* expects `stxns` to be in the correct string format as specified by `SignedTxnStr` (defined below).
* **MUST**, if successful, return an object `ret` such that `ret.txnIDs` is an array of strings in the correct format as specified by `TxnID`.
> Note that this ARC uses `txnID` rather than `txID`, the more common name for a transaction ID.
### String specification: `SignedTxnStr`
[Section titled “String specification: SignedTxnStr”](#string-specification-signedtxnstr)
Defined as in [ARC-0001](/arc-standards/arc-0001#interface-signedtxnstr):
> \[`SignedTxnStr` is] the base64 encoding of the canonical msgpack encoding of the `SignedTxn` corresponding object, as defined in the [Algorand specs](https://github.com/algorandfoundation/specs).
### String specification: `TxnID`
[Section titled “String specification: TxnID”](#string-specification-txnid)
A `TxnID` is a 52-character base32 string (without padding) corresponding to a 32-byte string. For example: `H2KKVITXKWL2VBZBWNHSYNU3DBLYBXQAVPFPXBCJ6ZZDVXQPSRTQ`.
### Error standard
[Section titled “Error standard”](#error-standard)
`PostTxnsError` follows the same rules as `SignTxnsError` from [ARC-0001](/arc-standards/arc-0001#error-interface-signtxnserror) and uses the same status codes as well as the following status codes:
| Status Code | Name | Description |
| ----------- | --------------------------------- | ----------------------------------------- |
| 4400 | Failure Sending Some Transactions | Some transactions were not sent properly. |
### Semantic requirements
[Section titled “Semantic requirements”](#semantic-requirements)
Regarding a call to `postTxns(stxns)` with promised return value `ret`:
* `postTxns` **MAY** assume that `stxns` is an array of valid `SignedTxnStr` strings that represent correctly signed transactions such that:
* Either all transactions belong to the same group of transactions and are in the correct order. In other words, either `stxns` is an array of a single transaction with a zero group ID (`txn.Group`), or `stxns` is an array of one or more transactions with the *same* non-zero group ID. The function **MUST** reject if the transactions do not match their group ID. (The caller must provide the transactions in the order defined by the group ID.)
> An early draft of this ARC required that the size of a group of transactions must be greater than 1 but, since the Algorand protocol supports groups of size 1, this requirement had been changed so dApps don’t have to have special cases for single transactions and can always send a group to the wallet.
* Or `stxns` is a concatenation of arrays satisfying the above.
* `postTxns` **MUST** attempt to post all transactions together. With the `algod` v2 API, this implies splitting the transactions into groups and making an API call per transaction group. `postTxns` **SHOULD NOT** wait after each transaction group but post all of them without pause in-between.
* `postTxns` **MAY** ask the user whether they approve posting those transactions.
> A dApp can always post transactions itself without the help of `postTxns` when a public network is used. However, when a private network is used, a dApp may need `postTxns`, and in this case, asking the user’s approval can make sense. Another such use case is when the user uses a specific trusted node that has some legal restrictions.
* `postTxns` **MUST** wait for confirmation that the transactions are finalized.
> TODO: Decide whether to add an optional flag to not wait for that.
* If successful, `postTxns` **MUST** resolve the returned promise with the list of transaction IDs `txnIDs` of the posted transactions `stxns`.
* If unsuccessful, `postTxns` **MUST** reject the promise with an error `err` of type `PostTxnsError` such that:
* `err.code=4400` if there was a failure sending the transactions or a code as specified in [ARC-0001](/arc-standards/arc-0001#error-standards) if the user or function disallowed posting the transactions.
* `err.message` **SHOULD** describe what went wrong in as much detail as possible.
* `err.successTxnIDs` **MUST** be an array such that `err.successTxnIDs[i]` is the transaction ID of `stxns[i]` if `stxns[i]` was successfully committed to the blockchain, and `null` otherwise.
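A dApp can recover from a partial failure by pairing `err.successTxnIDs` with the `stxns` it submitted. A minimal TypeScript sketch (the `PostTxnsError` shape follows this section; the helper name is illustrative, not part of the standard):

```typescript
// Rejection value shape described by this section of ARC-0007.
interface PostTxnsError extends Error {
  code: number;
  successTxnIDs: (string | null)[];
}

// Pair each submitted transaction with its outcome: a transaction ID if it
// was committed, or the original signed transaction string if it was not.
function splitByOutcome(
  stxns: string[],
  err: PostTxnsError,
): { committed: string[]; failed: string[] } {
  const committed: string[] = [];
  const failed: string[] = [];
  stxns.forEach((stxn, i) => {
    const txnID = err.successTxnIDs[i];
    if (txnID != null) {
      committed.push(txnID);
    } else {
      failed.push(stxn);
    }
  });
  return { committed, failed };
}
```

Transactions collected in `failed` can then be resubmitted or surfaced to the user.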
### Security considerations
[Section titled “Security considerations”](#security-considerations)
In case the wallet uses an API service that is secret or provided by the user, the wallet **MUST** ensure that the URL of the service and the potential tokens/headers are not leaked to the dApp.
> Leakage may happen by accidentally including too much information in responses or errors returned by the various methods. For example, if the Node.js superagent library is used without filtering errors and responses, errors and responses may include the request object, which includes the potentially secret API service URL / secret token headers.
## Rationale
[Section titled “Rationale”](#rationale)
This API allows DApps to use a user’s preferred connection in order to submit transactions to the network.
The user may wish to use a specific trusted node, or a particular paid service with their own secret token. This API protects the user’s secrets by not exposing connection details to the DApp.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations-1)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Wallet Sign and Post API
> A function used to simultaneously sign and post transactions to the network.
## Abstract
[Section titled “Abstract”](#abstract)
A function `signAndPostTxns`, which accepts an array of `WalletTransaction`s, signs them, and posts them to the network.
Accepts the inputs to [ARC-0001](/arc-standards/arc-0001#interface-signtxnsfunction)’s / [ARC-0005](/arc-standards/arc-0005#interface-signtxnsfunction)’s `signTxns`, and produces the output of [ARC-0007](/arc-standards/arc-0007#interface-posttxnsfunction)’s `postTxns`.
## Specification
[Section titled “Specification”](#specification)
### Interface `SignAndPostTxnsFunction`
[Section titled “Interface SignAndPostTxnsFunction”](#interface-signandposttxnsfunction)
```ts
export type SignAndPostTxnsFunction = (
  txns: WalletTransaction[],
  opts?: any,
) => Promise<PostTxnsResult>;
```
* `WalletTransaction` is as specified by [ARC-0005](/arc-standards/arc-0005#interface-wallettransaction).
* `PostTxnsResult` is as specified by [ARC-0007](/arc-standards/arc-0007#interface-posttxnsfunction).
Errors are handled exactly as specified by [ARC-0001](/arc-standards/arc-0001#error-standards) and [ARC-0007](/arc-standards/arc-0007#error-standard).
## Rationale
[Section titled “Rationale”](#rationale)
Allows the user to be sure that what they are signing is in fact all that is being sent. Doesn’t necessarily grant the DApp direct access to the signed txns, though they are posted to the network, so they should not be considered private.
Exposing only this API instead of exposing `postTxns` directly is potentially safer for the wallet user, since it only allows the posting of transactions which the user has explicitly approved.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
In case the wallet uses an API service that is secret or provided by the user, the wallet **MUST** ensure that the URL of the service and the potential tokens/headers are not leaked to the dApp.
> Leakage may happen by accidentally including too much information in responses or errors returned by the various methods. For example, if the Node.js superagent library is used without filtering errors and responses, errors and responses may include the request object, which includes the potentially secret API service URL / secret token headers.
For dApps using the `signAndPostTxns` function, it is **RECOMMENDED** to display a Waiting/Loading Screen to wait until the transaction is confirmed to prevent potential issues.
> The reasoning is the following: the pop-up/window in which the wallet is showing the waiting/loading screen may disappear in some cases (e.g., if the user clicks away from it). If it disappears, the user may be tempted to perform the action again, causing significant damage.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Wallet Algodv2 and Indexer API
> An API for accessing Algod and Indexer through a user's preferred connection.
## Abstract
[Section titled “Abstract”](#abstract)
Functions `getAlgodv2Client` and `getIndexerClient` which return a `BaseHTTPClient` that can be used to construct an `Algodv2Client` and an `IndexerClient` respectively (from the [JS SDK](https://github.com/algorand/js-algorand-sdk/blob/develop/src/client/baseHTTPClient.ts)).
## Specification
[Section titled “Specification”](#specification)
### Interface `GetAlgodv2ClientFunction`
[Section titled “Interface GetAlgodv2ClientFunction”](#interface-getalgodv2clientfunction)
```ts
type GetAlgodv2ClientFunction = () => Promise<BaseHTTPClient>
```
Returns a promised `BaseHTTPClient` that can then be used to build an `Algodv2Client`, where `BaseHTTPClient` is an interface matching the interface `algosdk.BaseHTTPClient` from the [JS SDK](https://github.com/algorand/js-algorand-sdk/blob/develop/src/client/baseHTTPClient.ts).
### Interface `GetIndexerClientFunction`
[Section titled “Interface GetIndexerClientFunction”](#interface-getindexerclientfunction)
```ts
type GetIndexerClientFunction = () => Promise<BaseHTTPClient>
```
Returns a promised `BaseHTTPClient` that can then be used to build an `Indexer`, where `BaseHTTPClient` is an interface matching the interface `algosdk.BaseHTTPClient` from the [JS SDK](https://github.com/algorand/js-algorand-sdk/blob/develop/src/client/baseHTTPClient.ts).
### Security considerations
[Section titled “Security considerations”](#security-considerations)
The returned `BaseHTTPClient` **SHOULD** filter the queries made to prevent potential attacks and reject (i.e., throw an exception) if this is not satisfied. A non-exhaustive list of checks is provided below:
* Check that the relative path does not contain `..`.
* Check that the only provided headers are the ones used by the SDK (when this ARC was written: `accept` and `content-type`) and their values are the ones provided by the SDK.
`BaseHTTPClient` **MAY** impose rate limits.
For higher security, `BaseHTTPClient` **MAY** also check the queries with regards to the OpenAPI specification of the node and the indexer.
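The checks above can be sketched as a small guard run before every forwarded request. This is a non-normative illustration; a real wallet would mirror whatever headers its SDK actually sends:

```typescript
// Headers the SDK is known to send (per this ARC at time of writing).
const ALLOWED_HEADERS = new Set(["accept", "content-type"]);

// Throws if the request falls outside the narrow surface the SDK needs,
// per the non-exhaustive checklist above.
function checkRequest(
  relativePath: string,
  headers: Record<string, string>,
): void {
  // Reject any path-traversal attempt.
  if (relativePath.includes("..")) {
    throw new Error("path traversal rejected");
  }
  // Reject headers the SDK does not use.
  for (const name of Object.keys(headers)) {
    if (!ALLOWED_HEADERS.has(name.toLowerCase())) {
      throw new Error(`unexpected header: ${name}`);
    }
  }
}
```

A wallet would call such a guard inside its `BaseHTTPClient` implementation before forwarding the request to its node or API service.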
In case the wallet uses an API service that is secret or provided by the user, the wallet **MUST** ensure that the URL of the service and the potential tokens/headers are not leaked to the dApp.
> Leakage may happen by accidentally including too much information in responses or errors returned by the various methods. For example, if the Node.js superagent library is used without filtering errors and responses, errors and responses may include the request object, which includes the potentially secret API service URL / secret token headers.
## Rationale
[Section titled “Rationale”](#rationale)
Nontrivial dApps often require the ability to query the network for activity. Algorand dApps written without regard to wallets are likely written using `Algodv2` and `Indexer` from `algosdk`. This document allows dApps to instantiate `Algodv2` and `Indexer` for a wallet API service, making it easy for JavaScript dApp authors to port their code to work with wallets.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations-1)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Wallet Reach Minimum Requirements
> Minimum requirements for Reach to function with a given wallet.
## Abstract
[Section titled “Abstract”](#abstract)
An amalgamation of APIs which comprise the minimum requirements for Reach to be able to function correctly with a given wallet.
## Specification
[Section titled “Specification”](#specification)
A group of related functions:
* `enable` (**REQUIRED**)
* `enableNetwork` (**OPTIONAL**)
* `enableAccounts` (**OPTIONAL**)
* `signAndPostTxns` (**REQUIRED**)
* `getAlgodv2Client` (**REQUIRED**)
* `getIndexerClient` (**REQUIRED**)
* `signTxns` (**OPTIONAL**)
* `postTxns` (**OPTIONAL**)
* `enable`: as specified in [ARC-0006](/arc-standards/arc-0006#interface-enablefunction).
* `signAndPostTxns`: as specified in [ARC-0008](/arc-standards/arc-0008#interface-signandposttxnsfunction).
* `getAlgodv2Client` and `getIndexerClient`: as specified in [ARC-0009](/arc-standards/arc-0009#specification).
* `signTxns`: as specified in [ARC-0005](/arc-standards/arc-0005#interface-signtxnsfunction) / [ARC-0001](/arc-standards/arc-0001#interface-signtxnsfunction).
* `postTxns`: as specified in [ARC-0007](/arc-standards/arc-0007#interface-posttxnsfunction).
There are additional semantics for using these functions together.
### Semantic Requirements
[Section titled “Semantic Requirements”](#semantic-requirements)
* `enable` **SHOULD** be called before calling the other functions and upon refresh of the dApp.
* Calling `enableNetwork` and then `enableAccounts` **MUST** be equivalent to calling `enable`.
* If used instead of `enable`: `enableNetwork` **SHOULD** be called before `enableAccounts` and `getIndexerClient`. Both `enableNetwork` and `enableAccounts` **SHOULD** be called before the other functions.
* If `signAndPostTxns`, `getAlgodv2Client`, `getIndexerClient`, `signTxns`, or `postTxns` are called before `enable` (or `enableAccounts`), they **SHOULD** throw an error object with property `code=4202`. (See Error Standards in [ARC-0001](/arc-standards/arc-0001#error-standards)).
* `getAlgodv2Client` and `getIndexerClient` **MUST** return connections to the network indicated by the `network` result of `enable`.
* `signAndPostTxns` **MUST** post transactions to the network indicated by the `network` result of `enable`.
* The result of `getAlgodv2Client` **SHOULD** only be used to query the network. `postTxns` (if available) and `signAndPostTxns` **SHOULD** be used to send transactions to the network. The `Algodv2Client` object **MAY** be modified to throw exceptions if the caller tries to use it to post transactions.
* `signTxns` and `postTxns` **MAY** or **MAY NOT** be provided. When one is provided, they both **MUST** be provided. In addition, `signTxns` **MAY** display a warning that the transactions are returned to the dApp rather than posted directly to the blockchain.
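The ordering requirement can be enforced with a small guard in the wallet: until `enable` has resolved, every other entry point throws an error with `code=4202`. A non-normative sketch (names are illustrative):

```typescript
// Error shape following ARC-0001's error standard.
class WalletError extends Error {
  constructor(public code: number, message: string) {
    super(message);
  }
}

// Wrap a wallet method so it fails with code 4202 until enable() succeeds.
function requireEnabled<A extends unknown[], R>(
  isEnabled: () => boolean,
  fn: (...args: A) => R,
): (...args: A) => R {
  return (...args: A) => {
    if (!isEnabled()) {
      throw new WalletError(4202, "call enable() first");
    }
    return fn(...args);
  };
}
```

A wallet would wrap `signAndPostTxns`, `getAlgodv2Client`, `getIndexerClient`, `signTxns`, and `postTxns` this way, flipping the flag once `enable` (or `enableAccounts`) resolves.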
### Additional requirements regarding LogicSigs
[Section titled “Additional requirements regarding LogicSigs”](#additional-requirements-regarding-logicsigs)
`signAndPostTxns` must also be able to handle logic sigs, and more generally transactions signed by the DApp itself. In case of logic sigs, callers are expected to sign the logic sig by themselves, rather than expecting the wallet to do so on their behalf. To handle these cases, we adopt and extend the [ARC-0001](/arc-standards/arc-0001#interface-wallettransaction) format for `WalletTransaction`s that do not need to be signed:
```json
{
  "txn": "...",
  "signers": [],
  "stxn": "..."
}
```
* `stxn` is a `SignedTxnStr`, as specified in [ARC-0007](/arc-standards/arc-0007#string-specification-signedtxnstr).
* For production wallets, `stxn` **MUST** be checked to match `txn`, as specified in [ARC-0001](/arc-standards/arc-0001#semantic-and-security-requirements).
`signAndPostTxns` **MAY** reject when none of the transactions need to be signed by the user.
## Rationale
[Section titled “Rationale”](#rationale)
In order for a wallet to be usable by a DApp, it must support features for account discovery, signing and posting transactions, and querying the network.
To whatever extent possible, the end users of a DApp should be empowered to select their own wallet, accounts, and network to be used with the DApp. Furthermore, said users should be able to use their preferred network node connection, without exposing their connection details and secrets (such as endpoint URLs and API tokens) to the DApp.
The APIs presented in this document and related documents are sufficient to cover the needed functionality, while protecting user choice and remaining compatible with best security practices. Most DApps indeed always need to post transactions immediately after signing. `signAndPostTxns` achieves this goal without revealing the signed transactions to the DApp, which prevents surprises to the user: there is no risk that the DApp keeps the transactions in memory and posts them later without the user knowing (either to achieve a malicious goal such as forcing double spending, or simply because the DApp has a bug). However, there are cases where `signTxns` and `postTxns` need to be used: for example when multiple users need to coordinate to sign an atomic transfer.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
```js
async function main(wallet) {
  // Account discovery
  const enabled = await wallet.enable({ genesisID: 'testnet-v1.0' });
  const from = enabled.accounts[0];
  // Querying
  const algodv2 = new algosdk.Algodv2(await wallet.getAlgodv2Client());
  const suggestedParams = await algodv2.getTransactionParams().do();
  const txns = makeTxns(from, suggestedParams);
  // Sign and post
  const res = await wallet.signAndPostTxns(txns);
  console.log(res);
}
```
Where `makeTxns` is comparable to what is seen in [ARC-0001](/arc-standards/arc-0001#reference-implementation)’s sample code.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Wallet Reach Browser Spec
> Convention for DApps to discover Algorand wallets in browser
## Abstract
[Section titled “Abstract”](#abstract)
A common convention for DApps to discover Algorand wallets in browser code: `window.algorand`. A property `algorand` attached to the `window` browser object, with all the features defined in [ARC-0010](/arc-standards/arc-0010#specification).
## Specification
[Section titled “Specification”](#specification)
```ts
interface WindowAlgorand {
  enable: EnableFunction;
  enableNetwork?: EnableNetworkFunction;
  enableAccounts?: EnableAccountsFunction;
  signAndPostTxns: SignAndPostTxnsFunction;
  getAlgodv2Client: GetAlgodv2ClientFunction;
  getIndexerClient: GetIndexerClientFunction;
  signTxns?: SignTxnsFunction;
  postTxns?: PostTxnsFunction;
}
```
With the specifications and semantics for each function as stated in [ARC-0010](/arc-standards/arc-0010#specification).
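A dApp can probe for the injected wallet defensively rather than assuming it exists. A non-normative sketch (the structural type is abbreviated from the interface above):

```typescript
// Abbreviated structural type for the injected object; the full shape is
// the WindowAlgorand interface above.
interface WindowAlgorand {
  enable: (...args: unknown[]) => Promise<unknown>;
  signAndPostTxns: (...args: unknown[]) => Promise<unknown>;
}

// Look up the injected wallet on a window-like object. Returns undefined
// when nothing (or something malformed) was injected, so the dApp can
// fall back gracefully, e.g. by prompting the user to install a wallet.
function detectWallet(w: { algorand?: unknown }): WindowAlgorand | undefined {
  const candidate = w.algorand as WindowAlgorand | undefined;
  if (candidate && typeof candidate.enable === "function") {
    return candidate;
  }
  return undefined;
}
```

In a browser, the dApp would call `detectWallet(window)` at startup.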
## Rationale
[Section titled “Rationale”](#rationale)
DApps should be unopinionated about which wallet they are used with. End users should be able to inject their wallet of choice into the DApp. Therefore, in browser contexts, we reserve `window.algorand` for this purpose.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Claimable ASA from vault application
> A smart signature contract account that can receive & disburse claimable Algorand Smart Assets (ASA) to an intended recipient account.
## Abstract
[Section titled “Abstract”](#abstract)
The goal of this ARC is to establish a standard in the Algorand ecosystem by which ASAs can be sent to an intended receiver even if their account is not opted in to the ASA.
An on-chain application, called a vault, will be used to custody assets on behalf of a given user, with only that user being able to withdraw assets. A master application will use box storage to keep track of the vault for any given Algorand account.
If integrated into ecosystem technologies including wallets, explorers, and dApps, this standard can provide enhanced capabilities around ASAs, which are otherwise strictly bound at the protocol level to require opting in to be received. This also enables the ability to “burn” ASAs by sending them to the vault associated with the global Zero Address.
## Motivation
[Section titled “Motivation”](#motivation)
Algorand requires accounts to opt in to receive any ASA, a fact which simultaneously:
1. Grants account holders fine-grained control over their holdings by allowing them to select which assets to allow and preventing receipt of unwanted tokens.
2. Frustrates users and developers when accounting for this requirement, especially since other blockchains do not have it.
This ARC lays out a new way to navigate the ASA opt in requirement.
### Contemplated Use Cases
[Section titled “Contemplated Use Cases”](#contemplated-use-cases)
The following use cases help explain how this capability can enhance the possibilities within the Algorand ecosystem.
#### Airdrops
[Section titled “Airdrops”](#airdrops)
An ASA creator who wants to send their asset to a set of accounts faces the challenge of needing their intended receivers to opt in to the ASA ahead of time, which requires non-trivial communication efforts and precludes the possibility of completing the airdrop as a surprise. This claimable ASA standard creates the ability to send an airdrop out to individual addresses so that the receivers can opt in and claim the asset at their convenience—or not, if they so choose.
#### Reducing New User On-boarding Friction
[Section titled “Reducing New User On-boarding Friction”](#reducing-new-user-on-boarding-friction)
An application operator who wants to on-board users to their game or business may want to reduce the friction of getting people started by decoupling their application on-boarding process from the process of funding a non-custodial Algorand wallet, if users are wholly new to the Algorand ecosystem. As long as the receiver’s address is known, an ASA can be sent to them ahead of them having ALGOs in their wallet to cover the minimum balance requirement and opt in to the asset.
#### Token Burning
[Section titled “Token Burning”](#token-burning)
Similarly to any regular account, the global Zero Address also has a corresponding vault to which one can send a quantity of any ASA to effectively “burn” it, rendering it lost forever. No one controls the Zero Address, so while it cannot opt in to any ASA to receive it directly, it also cannot make any claims from its corresponding vault, which thus functions as a purgatory account for unclaimable ASAs. By utilizing this approach, anyone can verifiably and irreversibly take a quantity of any ASA out of circulation forever.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
### Definitions
[Section titled “Definitions”](#definitions)
* **Claimable ASA**: An Algorand Standard Asset (ASA) which has been transferred to a vault following the standard set forth in this proposal such that only the intended receiver account can claim it at their convenience.
* **Vault**: An Algorand application used to hold claimable ASAs for a given account.
* **Master**: An Algorand application used to keep track of all of the vaults created for Algorand accounts.
* **dApp**: A decentralized application frontend, interpreted here to mean an off-chain frontend (a webapp, native app, etc.) that interacts with applications on the blockchain.
* **Explorer**: An off-chain application that allows browsing the blockchain, showing details of transactions.
* **Wallet**: An off-chain application that stores secret keys for on-chain accounts and can display and sign transactions for these accounts.
* **Mainnet ID**: The ID for the application that should be called upon claiming an asset on mainnet.
* **Testnet ID**: The ID for the application that should be called upon claiming an asset on testnet.
* **Minimum Balance Requirement (MBR)**: The minimum amount of Algos which must be held by an account on the ledger, which is currently 0.1A + 0.1A per ASA opted into.
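As a worked example of the MBR arithmetic (amounts in microAlgos; the constants mirror the 0.1A figures quoted above):

```typescript
const BASE_MBR = 100_000;    // 0.1 ALGO for the account itself
const PER_ASA_MBR = 100_000; // 0.1 ALGO per ASA opted into

// Minimum balance, in microAlgos, for an account opted into `numAsas` ASAs.
function minimumBalance(numAsas: number): number {
  return BASE_MBR + numAsas * PER_ASA_MBR;
}
```

So an account opted into two ASAs must hold at least 0.3 ALGO; this is the amount a vault funder fronts on the receiver’s behalf.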
### TEAL Smart Contracts
[Section titled “TEAL Smart Contracts”](#teal-smart-contracts)
There are two smart contracts being utilized: The [vault](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0012/vault.teal) and the [master](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0012/master.teal).
#### Vault
[Section titled “Vault”](#vault)
##### Storage
[Section titled “Storage”](#storage)
| Type | Key | Value | Description |
| ------ | ---------- | -------------- | ----------------------------------------------------- |
| Global | “creator” | Account | The account that funded the creation of the vault |
| Global | “master” | Application ID | The application ID that created the vault |
| Global | “receiver” | Account | The account that can claim/reject ASAs from the vault |
| Box | Asset ID | Account | The account that funded the MBR for the given ASA |
##### Methods
[Section titled “Methods”](#methods)
###### Opt-In
[Section titled “Opt-In”](#opt-in)
* Opts vault into ASA
* Creates box: ASA -> “funder”
* “funder” being the account that initiates the opt-in
* “funder” is the one covering the ASA MBR
###### Claim
[Section titled “Claim”](#claim)
* Transfers ASA from vault to “receiver”
* Deletes box: ASA -> “funder”
* Returns ASA and box MBR to “funder”
###### Reject
[Section titled “Reject”](#reject)
* Sends ASA to ASA creator
* Refunds rejector all fees incurred (thus rejecting is free)
* Deletes box: ASA -> “funder”
* Remaining balance sent to fee sink
#### Master
[Section titled “Master”](#master)
##### Storage
[Section titled “Storage”](#storage-1)
| Type | Key | Value | Description |
| ---- | ------- | -------------- | ------------------------------- |
| Box | Account | Application ID | The vault for the given account |
##### Methods
[Section titled “Methods”](#methods-1)
###### Create Vault
[Section titled “Create Vault”](#create-vault)
* Creates a vault for a given account (“receiver”)
* Creates box: “receiver” -> vault ID
* App/box MBR funded by vault creator
###### Delete Vault
[Section titled “Delete Vault”](#delete-vault)
* Deletes vault app
* Deletes box: “receiver” -> vault ID
* App/box MBR returned to vault creator
###### Verify Axfer
[Section titled “Verify Axfer”](#verify-axfer)
* Verifies asset is going to correct vault for “receiver”
###### getVaultID
[Section titled “getVaultID”](#getvaultid)
* Returns vault ID for “receiver”
* Fails if “receiver” does not have vault
###### getVaultAddr
[Section titled “getVaultAddr”](#getvaultaddr)
* Returns vault address for “receiver”
* Fails if “receiver” does not have vault
###### hasVault
[Section titled “hasVault”](#hasvault)
* Determines if “receiver” has a vault
## Rationale
[Section titled “Rationale”](#rationale)
This design was created to offer a standard mechanism by which wallets, explorers, and dapps could enable users to send, receive, and find claimable ASAs without requiring any changes to the core protocol.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
This ARC makes no changes to the consensus protocol and creates no backwards compatibility issues.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
### Source code
[Section titled “Source code”](#source-code)
* [Contracts](https://github.com/algorandfoundation/ARCs/tree/main/assets/arc-0012/contracts)
* [TypeScript SDK](https://github.com/algorandfoundation/ARCs/tree/main/assets/arc-0012/arc12-sdk)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Both applications (the vault and the master) have not been audited.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Encrypted Short Messages
> Scheme for encryption/decryption that allows for private messages.
## Abstract
[Section titled “Abstract”](#abstract)
The goal of this convention is to have a standard way for block explorers, wallets, exchanges, marketplaces, and more generally, client software to send, read & delete short encrypted messages.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
### Account’s message Application
[Section titled “Account’s message Application”](#accounts-message-application)
To receive a message, an Account **MUST** create an application that follows this convention:
* A Local State named `public_key` **MUST** contain an *NACL Public Key (Curve 25519)*
* A Local State named `arc` **MUST** contain the value `arc15-nacl-curve25519`
* A Box `inbox` where:
* The key is the ABI encoding of the tuple `(address,uint64)`, containing the address of the sender and the round when the message is sent
* The value is the encrypted **text**
> With this design, a sender can write only one message per round. In the same round, an account can receive multiple messages as long as they come from distinct senders.
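Since `address` (32 bytes) and `uint64` (8 bytes) are both static ABI types, the encoded tuple `(address,uint64)` is simply their concatenation: a 40-byte box key. A dependency-free sketch, assuming the sender’s address has already been decoded to its 32-byte public key (in practice the JS SDK’s `algosdk.ABIType.from("(address,uint64)")` produces the same bytes):

```typescript
// Build the 40-byte inbox box key: the 32-byte sender public key followed
// by the round number encoded as a big-endian uint64.
// (Plain number arithmetic is used, so rounds must stay below 2^53.)
function inboxKey(senderPk: Uint8Array, round: number): Uint8Array {
  if (senderPk.length !== 32) throw new Error("expected a 32-byte public key");
  if (!Number.isSafeInteger(round) || round < 0) throw new Error("invalid round");
  const key = new Uint8Array(40); // address (32) + uint64 (8)
  key.set(senderPk, 0);
  let r = round;
  for (let i = 39; i >= 32; i--) {
    key[i] = r % 256; // least-significant byte goes last (big-endian)
    r = Math.floor(r / 256);
  }
  return key;
}
```

A reader client would build this key to look up (or delete) the box for a given sender and round.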
### ABI Interface
[Section titled “ABI Interface”](#abi-interface)
The associated smart contract **MUST** implement the following ABI interface:
```json
{
  "name": "ARC_0015",
  "desc": "Interface for an encrypted messages application",
  "methods": [
    {
      "name": "write",
      "desc": "Write encrypted text to the box inbox",
      "args": [
        { "type": "byte[]", "name": "text", "desc": "Encrypted text provided by the sender." }
      ],
      "returns": { "type": "void" }
    },
    {
      "name": "authorize",
      "desc": "Authorize an address to send a message",
      "args": [
        { "type": "byte[]", "name": "address_to_add", "desc": "Address of a sender" },
        { "type": "byte[]", "name": "info", "desc": "Information about the sender" }
      ],
      "returns": { "type": "void" }
    },
    {
      "name": "remove",
      "desc": "Delete the encrypted text sent by an account on a particular round. Send the MBR used for a box to the Application's owner.",
      "args": [
        { "type": "byte[]", "name": "address", "desc": "Address of the sender" },
        { "type": "uint64", "name": "round", "desc": "Round when the message was sent" }
      ],
      "returns": { "type": "void" }
    },
    {
      "name": "set_public_key",
      "desc": "Register a NACL Public Key (Curve 25519) to the global value public_key",
      "args": [
        { "type": "byte[]", "name": "public_key", "desc": "NACL Public Key (Curve 25519)" }
      ],
      "returns": { "type": "void" }
    }
  ]
}
```
> Warning: The remove method only removes the box used for a message, but it is still possible to access it by looking at the indexer.
## Rationale
[Section titled “Rationale”](#rationale)
The Algorand blockchain unlocks many new use cases, anonymous user login to dApps and classical Web 2.0 solutions being one of them. For many use cases, anonymous users still require asynchronous event notifications, and email seems to be the only standard option at the time of the creation of this ARC. With wallet adoption of this standard, users will enjoy real-time encrypted A2P (application-to-person) notifications without having to provide their email addresses and without any vendor lock-in.
It would also be possible to implement a similar version of this ARC with a single App storing every message for every Account.
Another approach was to use the note field for messages, but with box storage available, this design is more practical and secure.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
The following code is not audited and is only here for information purposes. It **MUST NOT** be used in production.
Here is an example of how the code can be run in Python: [main.py](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0015/main.py).
> The delete method is only for test purposes; it is not part of the ABI for an `ARC-15` Application.
An example of the application created using Beaker can be found here: [application.py](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0015/application.py).
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Even if the message is encrypted, it will stay on the blockchain. If the secret key used to decrypt it is ever compromised, every related message is at risk.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Convention for declaring traits of an NFT's
> This is a convention for declaring traits in an NFT's metadata.
## Abstract
[Section titled “Abstract”](#abstract)
The goal is to establish a standard for how traits are declared inside an NFT’s metadata, for example as specified in ([ARC-3](/arc-standards/arc-0003)), ([ARC-69](/arc-standards/arc-0069)) or ([ARC-72](/arc-standards/arc-0072)).
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
If the property `traits` is provided anywhere in the metadata, it **MUST** adhere to the schema below. If the NFT is a part of a larger collection and that collection has traits, all the available traits for the collection **MUST** be listed as a property of the `traits` object. If the NFT does not have a particular trait, its value **MUST** be “none”.
The JSON schema for `traits` is as follows:
```json
{
  "title": "Traits for Non-Fungible Token",
  "type": "object",
  "properties": {
    "traits": {
      "type": "object",
      "description": "Traits (attributes) that can be used to calculate things like rarity. Values may be strings or numbers"
    }
  }
}
```
#### Examples
[Section titled “Examples”](#examples)
##### Example of an NFT that has traits
[Section titled “Example of an NFT that has traits”](#example-of-an-nft-that-has-traits)
```json
{
  "name": "NFT With Traits",
  "description": "NFT with traits",
  "image": "https://s3.amazonaws.com/your-bucket/images/two.png",
  "image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
  "properties": {
    "creator": "Tim Smith",
    "created_at": "January 2, 2022",
    "traits": {
      "background": "red",
      "shirt_color": "blue",
      "glasses": "none",
      "tattoos": 4
    }
  }
}
```
##### Example of an NFT that has no traits
[Section titled “Example of an NFT that has no traits”](#example-of-an-nft-that-has-no-traits)
```json
{
"name": "NFT Without Traits",
"description": "NFT without traits",
"image": "https://s3.amazonaws.com/your-bucket/images/one.png",
"image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
"properties": {
"creator": "John Smith",
"created_at": "January 1, 2022"
}
}
```
## Rationale
[Section titled “Rationale”](#rationale)
A standard for traits is needed so programs know what to expect in order to calculate things like rarity.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
If the metadata does not have the field `traits`, each value of `properties` should be considered a trait.
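The fallback rule above can be sketched as follows (the function name is illustrative, not normative): a reader first looks for a `traits` object and otherwise treats `properties` itself as the traits.

```typescript
// Hypothetical reader implementing the backwards-compatibility rule:
// if `properties` contains a `traits` object, use it; otherwise every
// value of `properties` is considered a trait.
function extractTraits(
  properties: Record<string, unknown>,
): Record<string, unknown> {
  const t = properties["traits"];
  if (typeof t === "object" && t !== null && !Array.isArray(t)) {
    return t as Record<string, unknown>;
  }
  return properties;
}
```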
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Royalty Enforcement Specification
> An ARC to specify the methods and mechanisms to enforce Royalty payments as part of ASA transfers
## Abstract
[Section titled “Abstract”](#abstract)
A specification to describe a set of methods that offer an API to enforce Royalty Payments to a Royalty Receiver given a policy describing the royalty shares, both on primary and secondary sales.
This is an implementation of an [ARC-20](/arc-standards/arc-0020) specification and other methods may be implemented in the same contract according to that specification.
## Motivation
[Section titled “Motivation”](#motivation)
This ARC is defined to provide a consistent set of asset configurations and ABI methods that, together, enable a royalty payment to a Royalty Receiver. An example may include some music rights where the label, the artist, and any investors have some assigned royalty percentage that should be enforced on transfer. During the sale transaction, the appropriate royalty payments should be included or the transaction must be rejected.
## Specification
[Section titled “Specification”](#specification)
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
* [Royalty Policy](#royalty-policy) - The settings that define how royalty payments are collected.
* [Royalty Enforcer](#royalty-enforcer) - The application that enforces the royalty payments given the Royalty Policy and performs transfers of the assets.
* [Royalty Enforcer Administrator](#royalty-enforcer-administrator) - The account that may call administrative-level methods against the Royalty Enforcer.
* [Royalty Receiver](#royalty-receiver) - The account that receives the royalty payment. It can be any valid Algorand account.
* [Royalty Basis](#royalty-basis) - The share of a payment that is due to the Royalty Receiver.
* [Royalty Asset](#royalty-asset) - The ASA that should have royalties enforced during a transfer.
* [Asset Offer](#asset-offer) - A data structure stored in the current owner's local state representing the number of units of the asset being offered and the authorizing account for any transfer requests.
* [Third Party Marketplace](#third-party-marketplace) - Any marketplace that implements the appropriate methods to initiate transfers.
### Royalty Policy
[Section titled “Royalty Policy”](#royalty-policy)
```ts
interface RoyaltyPolicy {
royalty_basis: number // The percentage of the payment due, specified in basis points (0-10,000)
royalty_receiver: string // The address that should collect the payment
}
```
A Royalty Policy consists of a `royalty_receiver` that should receive a Royalty payment and a `royalty_basis` representing the share of the total payment amount that is due.
### Royalty Enforcer
[Section titled “Royalty Enforcer”](#royalty-enforcer)
The Royalty Enforcer is an instance of the contract, an Application, that controls the transfer of ASAs subject to the Royalty Policy. This is accomplished by exposing an interface defined as a set of [ABI Methods](#abi-methods) allowing a grouped transaction call containing a payment and a [Transfer](#transfer) request.
### Royalty Enforcer Administrator
[Section titled “Royalty Enforcer Administrator”](#royalty-enforcer-administrator)
The Royalty Enforcer Administrator is the account that has privileges to call administrative actions against the Royalty Enforcer. If one is not set, the account that created the application MUST be used. To update the Royalty Enforcer Administrator, the [Set Administrator](#set-administrator) method is called by the current administrator and passed the address of the new administrator. An implementation of this spec may choose how it enforces that this method is called by the administrator.
### Royalty Receiver
[Section titled “Royalty Receiver”](#royalty-receiver)
The Royalty Receiver is a generic account that could be set to a Single Signature, a Multi Signature, a Smart Signature or even to another Smart Contract. The Royalty Receiver is then responsible for any further royalty distribution logic, making the Royalty Enforcement Specification more general and composable.
### Royalty Basis
[Section titled “Royalty Basis”](#royalty-basis)
The Royalty Basis is a value representing the percentage of the payment made during a transfer that is due to the Royalty Receiver. The Royalty Basis **MUST** be specified in terms of basis points of the payment amount.
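As a worked example of basis points: a 5% royalty is a Royalty Basis of 500, so a 1,000,000 µAlgo payment owes 50,000 µAlgo. A minimal sketch of that arithmetic (the helper name is ours):

```typescript
// Royalty due for a payment, with the Royalty Basis in basis points
// (10,000 basis points == 100%). Integer (truncating) division mirrors
// what on-chain math would do.
function royaltyAmount(paymentAmount: bigint, royaltyBasis: bigint): bigint {
  if (royaltyBasis < 0n || royaltyBasis > 10_000n) {
    throw new Error("royalty basis must be between 0 and 10,000");
  }
  return (paymentAmount * royaltyBasis) / 10_000n;
}
```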
### Royalty Asset
[Section titled “Royalty Asset”](#royalty-asset)
The Royalty Asset is an ASA subject to royalty payment collection and **MUST** be created with the [appropriate parameters](#royalty-asset-parameters).
> Because the protocol does not allow updating an address parameter after it has been deleted (set to the zero address), if the asset creator thinks they may want to modify these parameters later, they must be set to some non-zero address.
#### Asset Offer
[Section titled “Asset Offer”](#asset-offer)
The Asset Offer is a data structure stored in the owner's local state. It is keyed in local storage by the byte string representing the ASA ID.
```ts
interface AssetOffer {
auth_address: string // The address of a marketplace or account that may issue a transfer request
offered_amount: number // The number of units being offered
}
```
This concept is important to this specification because we use the clawback feature to transfer the assets. Without some signal that the current owner is willing to have their assets transferred, it may be possible to transfer the asset without their permission. In order for a transfer to occur, this field **MUST** be set and the parameters of the transfer request **MUST** match the value set.
> A transfer matching the offer requires that the transfer amount is less than or equal to the offered amount and that the transfer is sent by the `auth_address`. After the transfer is completed this value **MUST** be wiped from the local state of the owner's account.
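The matching rule can be sketched as follows (types and names are illustrative; the real check runs inside the Royalty Enforcer against local state):

```typescript
interface AssetOffer {
  auth_address: string; // account authorized to issue a transfer request
  offered_amount: number; // units of the asset currently offered
}

// A transfer request matches an offer only when it is initiated by the
// authorized address and asks for no more than the offered amount.
function transferMatchesOffer(
  offer: AssetOffer | undefined,
  sender: string,
  amount: number,
): boolean {
  if (offer === undefined) return false; // nothing offered: reject
  return sender === offer.auth_address && amount <= offer.offered_amount;
}
```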
#### Royalty Asset Parameters
[Section titled “Royalty Asset Parameters”](#royalty-asset-parameters)
The Clawback parameter **MUST** be set to the Application Address of the Royalty Enforcer.
> Since the Royalty Enforcer relies on using the Clawback mechanism to perform the transfer the Clawback should NEVER be set to the zero address. The Freeze parameter **MUST** be set to the Application Address of the Royalty Enforcer if `FreezeAddr != ZeroAddress`, else set to `ZeroAddress`. If the asset creator wants to allow an ASA to be Royalty Free after some conditions are met, it should be set to the Application Address The Manager parameter **MUST** be set to the Application Address of the Royalty Enforcer if `ManagerAddr != ZeroAddress`, else set to `ZeroAddress`. If the asset creator wants to update the Freeze parameter, this should be set to the application address The Reserve parameter **MAY** be set to anything. The `DefaultFrozen` **MUST** be set to true.
### Third Party Marketplace
[Section titled “Third Party Marketplace”](#third-party-marketplace)
In order to support secondary sales on external markets this spec was designed such that the Royalty Asset may be listed without transferring it from the current owner’s account. A Marketplace may call the transfer request as long as the address initiating the transfer has been set as the `auth_address` through the [offer](#offer) method in some previous transaction by the current owner.
### ABI Methods
[Section titled “ABI Methods”](#abi-methods)
The following is a set of methods that conform to the [ABI](/arc-standards/arc-0004) specification meant to enable the configuration of a Royalty Policy and perform transfers. Any Inner Transactions that may be performed as part of the execution of the Royalty Enforcer application **SHOULD** set the fee to 0 and enforce fee payment through fee pooling by the caller.
#### Set Administrator:
[Section titled “Set Administrator:”](#set-administrator)
*OPTIONAL*
```plaintext
set_administrator(
administrator: address,
)
```
Sets the administrator for the Royalty Enforcer contract. If this method is never called the creator of the application **MUST** be considered the administrator. This method **SHOULD** have checks to ensure it is being called by the current administrator. The `administrator` parameter is the address of the account that should be set as the new administrator for this Royalty Enforcer application.
#### Set Policy:
[Section titled “Set Policy:”](#set-policy)
*REQUIRED*
```plaintext
set_policy(
royalty_basis: uint64,
royalty_receiver: account,
)
```
Sets the policy for any assets using this application as a Royalty Enforcer. The `royalty_basis` is the percentage for royalty payment collection, specified in basis points (e.g., 1% is 100). A Royalty Basis **SHOULD** be immutable; if an application call is made that would overwrite an existing value, it **SHOULD** fail. See [Security Considerations](#security-considerations) for more details on how to handle this parameter's mutability. The `royalty_receiver` is the address of the account that should receive a partial share of the payment for any [transfer](#transfer) of an asset subject to royalty collection.
#### Set Payment Asset:
[Section titled “Set Payment Asset:”](#set-payment-asset)
*REQUIRED*
```plaintext
set_payment_asset(
payment_asset: asset,
allowed: boolean,
)
```
The `payment_asset` argument represents the ASA ID that is acceptable for payment. The contract logic **MUST** opt into the asset specified in order to accept it as payment as part of a transfer. This method **SHOULD** have checks to ensure it is being called by the current administrator. The `allowed` argument is a boolean representing whether or not this asset is allowed. The Royalty Receiver **MUST** be opted into the full set of assets contained in this list of payment assets.
> In the case that an account is not opted into an asset, any transfers where payment is specified in that asset will fail until the account opts into the asset or the policy is updated.
#### Transfer:
[Section titled “Transfer:”](#transfer)
*REQUIRED*
```plaintext
transfer_algo_payment(
royalty_asset: asset,
royalty_asset_amount: uint64,
from: account,
to: account,
royalty_receiver: account,
payment: pay,
current_offer_amount: uint64,
)
```
And
```plaintext
transfer_asset_payment(
royalty_asset: asset,
royalty_asset_amount: uint64,
from: account,
to: account,
royalty_receiver: account,
payment: axfer,
payment_asset: asset,
current_offer_amount: uint64,
)
```
Transfers the Asset after checking that the royalty policy is adhered to. This call must be sent by the `auth_address` specified by the current offer. There **MUST** be a royalty policy defined prior to attempting a transfer. There are two different method signatures specified, one for simple Algo payments and one for an Asset as payment; the appropriate method should be called depending on the circumstance.
* The `royalty_asset` is the ASA ID to be transferred.
* The `from` parameter is the account the ASA is transferred from.
* The `to` parameter is the account the ASA is transferred to.
* The `royalty_receiver` parameter is the account that collects the royalty payment.
* The `royalty_asset_amount` parameter is the number of units of this ASA ID to transfer. The amount **MUST** be less than or equal to the amount [offered](#offer) by the `from` account.
* The `payment` parameter is a reference to the transaction that is transferring some asset (ASA or Algos) from the buyer to the Application Address of the Royalty Enforcer.
* The `payment_asset` parameter is specified in the case that the payment is being made with some ASA rather than with Algos. It **MUST** match the Asset ID of the AssetTransfer payment transaction.
* The `current_offer_amount` parameter is the current amount of the Royalty Asset [offered](#offer) by the `from` account.
The transfer call **SHOULD** be part of a group of size 2 (payment/asset transfer + app call).
> See [Security Considerations](#security-considerations) for details on how this check may be circumvented.
Prior to each transfer the Royalty Enforcer **SHOULD** assert that the Seller (the `from` parameter) and the Buyer (the `to` parameter) have a blank or unset `AuthAddr`. The reasoning for this check is described in [Security Considerations](#security-considerations). It is purposely left to the implementer to decide if it should be checked.
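The recommended group shape can be sketched with simplified transaction-type tags (stand-ins, not algosdk types): a payment or asset transfer followed by the application call.

```typescript
// Simplified transaction-type tags; a real implementation would inspect
// actual transaction objects.
type TxnType = "pay" | "axfer" | "appl";

// True when the group has the RECOMMENDED size-2 shape:
// payment (Algo or ASA) first, then the transfer application call.
function isValidTransferGroup(group: TxnType[]): boolean {
  return (
    group.length === 2 &&
    (group[0] === "pay" || group[0] === "axfer") &&
    group[1] === "appl"
  );
}
```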
#### Offer:
[Section titled “Offer:”](#offer)
*REQUIRED*
```plaintext
offer(
royalty_asset: asset,
royalty_asset_amount: uint64,
auth_address: account,
offered_amount: uint64,
offered_auth_addr: account,
)
```
Flags the asset as transferrable and sets the address that may initiate the transfer request. The `royalty_asset` is the ASA ID that is being offered. The `royalty_asset_amount` is the number of units of the ASA ID that are offered. The account making this call **MUST** have at least this amount. The `auth_address` is the address that may initiate a [transfer](#transfer).
> This address may be any valid address in the Algorand network including an Application Account's address.
The `offered_amount` is the number of units of the ASA ID that are currently offered. In the case that this is an update, it should be the amount being replaced. In the case that this is a new offer it should be 0.
The `offered_auth_addr` is the address that may currently initiate a [transfer](#transfer). In the case that this is an update, it should be the address being replaced. In the case that this is a new offer it should be the zero address.
If any transfer is initiated by an address that is *not* listed as the `auth_address` for this asset ID from this account, the transfer **MUST** be rejected.
If this method is called when there is an existing entry for the same `royalty_asset`, the call is treated as an update. In the update case the contract **MUST** compare the `offered_amount` and `offered_auth_addr` arguments with what is currently set. If the values differ, the call **MUST** be rejected. This requirement is meant to prevent a race condition where the `auth_address` has a `transfer` accepted before the `offer`-ing account sees the update; in that case the offering account might offer more than they would otherwise want to. An example is given in [Security Considerations](#security-considerations).
To rescind an offer, this method is called with 0 as the new offered amount.
If a [transfer](#transfer) or [royalty\_free\_move](#royalty-free-move) is called successfully, the `offer` **SHOULD** be updated or deleted from local state. Exactly how to update the offer is left to the implementer. In the case of a partially filled offer, the amount may be updated to reflect `offered_amount - amount transferred`, or the offer may be deleted completely.
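The update rule above is effectively a compare-and-swap on the stored offer. A sketch under illustrative names (the zero-address stand-in and helper are ours):

```typescript
interface StoredOffer {
  auth_address: string;
  offered_amount: number;
}

const ZERO_ADDRESS = "ZERO_ADDRESS"; // stand-in for Algorand's zero address

// Applies an offer update only when the caller's view of the current
// offer (claimedAmount/claimedAuth) matches what is actually stored,
// preventing the race condition described above.
function applyOffer(
  current: StoredOffer | undefined,
  next: StoredOffer,
  claimedAmount: number,
  claimedAuth: string,
): StoredOffer {
  const amount = current?.offered_amount ?? 0;
  const auth = current?.auth_address ?? ZERO_ADDRESS;
  if (claimedAmount !== amount || claimedAuth !== auth) {
    throw new Error("offer changed since caller last read it; rejecting");
  }
  return next;
}
```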
#### Royalty Free Move:
[Section titled “Royalty Free Move:”](#royalty-free-move)
*OPTIONAL*
```plaintext
royalty_free_move(
royalty_asset: asset,
royalty_asset_amount: uint64,
from: account,
to: account,
offered_amount: uint64,
)
```
Moves an asset to the new address without collecting any royalty payment. Prior to this method being called the current owner **MUST** offer their asset to be moved. The `auth_address` of the offer **SHOULD** be set to the address of the Royalty Enforcer Administrator and calling this method **SHOULD** have checks to ensure it is being called by the current administrator.
> This may be useful in the case of a marketplace where the NFT must be placed in some escrow account. Any logic may be used to validate this is an authorized transfer.
The `royalty_asset` is the asset being transferred without applying the Royalty Enforcement logic.
The `royalty_asset_amount` is the number of units of this ASA ID that should be moved.
The `from` parameter is the current owner of the asset.
The `to` parameter is the intended receiver of the asset.
The `offered_amount` is the number of units of this asset currently offered. This value **MUST** be greater than or equal to the amount being transferred. The `offered_amount` value is passed to prevent the race or attack described in [Security Considerations](#security-considerations).
### Read Only Methods
[Section titled “Read Only Methods”](#read-only-methods)
Three methods are specified here as `read-only` as defined in [ARC-22](/arc-standards/arc-0022).
#### Get Policy:
[Section titled “Get Policy:”](#get-policy)
*REQUIRED*
```plaintext
get_policy()(address,uint64)
```
Gets the current [Royalty Policy](#royalty-policy) setting for this Royalty Enforcer. The return value is a tuple of type `(address,uint64)`, where the `address` is the [Royalty Receiver](#royalty-receiver) and the `uint64` is the [Royalty Basis](#royalty-basis).
#### Get Offer:
[Section titled “Get Offer:”](#get-offer)
*REQUIRED*
```plaintext
get_offer(
royalty_asset: asset,
from: account,
)(address,uint64)
```
Gets the current [Asset Offer](#asset-offer) for a given asset as set by its owner. The `royalty_asset` parameter is the asset ID of the [Royalty Asset](#royalty-asset) that has been offered. The `from` parameter is the account that placed the offer. The return value is a tuple of type `(address,uint64)`, where `address` is the authorizing address that may make a transfer request and the `uint64` is the amount offered.
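Per the ARC-4 encoding rules, an `address` is 32 bytes and a `uint64` is 8 big-endian bytes, so this static tuple is their 40-byte concatenation. A minimal decoder sketch (the address is left as raw public-key bytes; rendering it in Algorand's base32 form is omitted):

```typescript
// Decodes an ARC-4 (address,uint64) return value from its raw bytes.
function decodeAddressUint64(
  raw: Uint8Array,
): { address: Uint8Array; value: bigint } {
  if (raw.length !== 40) {
    throw new Error("expected 40 bytes for (address,uint64)");
  }
  const address = raw.slice(0, 32); // 32-byte public key
  let value = 0n;
  for (const b of raw.slice(32)) {
    value = (value << 8n) | BigInt(b); // 8 big-endian bytes
  }
  return { address, value };
}
```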
#### Get Administrator:
[Section titled “Get Administrator:”](#get-administrator)
*OPTIONAL*, unless `set_administrator` is implemented, in which case *REQUIRED*
```plaintext
get_administrator()address
```
Gets the [Royalty Enforcer Administrator](#royalty-enforcer-administrator) set for this Royalty Enforcer. The return value is of type `address` and represents the address of the account that may call administrative methods for this Royalty Enforcer application.
### Storage
[Section titled “Storage”](#storage)
While the details of storage are described here, `readonly` methods are specified to give callers a way to retrieve the information without having to write parsing logic. The exact location and encoding of these fields are left to the implementer.
#### Global Storage
[Section titled “Global Storage”](#global-storage)
The parameters that describe a policy are stored in Global State. The relevant keys are:
* `royalty_basis` - The percentage, specified in basis points, of the payment
* `royalty_receiver` - The account that should be paid the royalty
Another key is used to store the current administrator account:
* `administrator` - The account that is allowed to make administrative calls to this Royalty Enforcer application
#### Local Storage
[Section titled “Local Storage”](#local-storage)
For an offered Asset, the authorizing address and amount offered should be stored in a Local State field for the account offering the Asset.
### Full ABI Spec
[Section titled “Full ABI Spec”](#full-abi-spec)
```json
{
"name": "ARC18",
"methods": [
{
"name": "set_policy",
"args": [
{
"type": "uint64",
"name": "royalty_basis"
},
{
"type": "address",
"name": "royalty_receiver"
}
],
"returns": {
"type": "void"
},
"desc": "Sets the royalty basis and royalty receiver for this royalty enforcer"
},
{
"name": "set_administrator",
"args": [
{
"type": "address",
"name": "new_admin"
}
],
"returns": {
"type": "void"
},
"desc": "Sets the administrator for this royalty enforcer"
},
{
"name": "set_payment_asset",
"args": [
{
"type": "asset",
"name": "payment_asset"
},
{
"type": "bool",
"name": "is_allowed"
}
],
"returns": {
"type": "void"
},
"desc": "Triggers the contract account to opt in or out of an asset that may be used for payment of royalties"
},
{
"name": "set_offer",
"args": [
{
"type": "asset",
"name": "royalty_asset"
},
{
"type": "uint64",
"name": "royalty_asset_amount"
},
{
"type": "address",
"name": "auth_address"
},
{
"type": "uint64",
"name": "prev_offer_amt"
},
{
"type": "address",
"name": "prev_offer_auth"
}
],
"returns": {
"type": "void"
},
"desc": "Flags that an asset is offered for sale and sets address authorized to submit the transfer"
},
{
"name": "transfer_asset_payment",
"args": [
{
"type": "asset",
"name": "royalty_asset"
},
{
"type": "uint64",
"name": "royalty_asset_amount"
},
{
"type": "account",
"name": "owner"
},
{
"type": "account",
"name": "buyer"
},
{
"type": "account",
"name": "royalty_receiver"
},
{
"type": "axfer",
"name": "payment_txn"
},
{
"type": "asset",
"name": "payment_asset"
},
{
"type": "uint64",
"name": "offered_amt"
}
],
"returns": {
"type": "void"
},
"desc": "Transfers an Asset from one account to another and enforces royalty payments. This instance of the `transfer` method requires an AssetTransfer transaction and an Asset to be passed corresponding to the Asset id of the transfer transaction."
},
{
"name": "transfer_algo_payment",
"args": [
{
"type": "asset",
"name": "royalty_asset"
},
{
"type": "uint64",
"name": "royalty_asset_amount"
},
{
"type": "account",
"name": "owner"
},
{
"type": "account",
"name": "buyer"
},
{
"type": "account",
"name": "royalty_receiver"
},
{
"type": "pay",
"name": "payment_txn"
},
{
"type": "uint64",
"name": "offered_amt"
}
],
"returns": {
"type": "void"
},
"desc": "Transfers an Asset from one account to another and enforces royalty payments. This instance of the `transfer` method requires a PaymentTransaction for payment in algos"
},
{
"name": "royalty_free_move",
"args": [
{
"type": "asset",
"name": "royalty_asset"
},
{
"type": "uint64",
"name": "royalty_asset_amount"
},
{
"type": "account",
"name": "owner"
},
{
"type": "account",
"name": "receiver"
},
{
"type": "uint64",
"name": "offered_amt"
}
],
"returns": {
"type": "void"
},
"desc": "Moves the asset passed from one account to another"
},
{
"name": "get_offer",
"args": [
{
"type": "uint64",
"name": "royalty_asset"
},
{
"type": "account",
"name": "owner"
}
],
"returns": {
"type": "(address,uint64)"
},
"read-only":true
},
{
"name": "get_policy",
"args": [],
"returns": {
"type": "(address,uint64)"
},
"read-only":true
},
{
"name": "get_administrator",
"args": [],
"returns": {
"type": "address"
},
"read-only":true
}
],
"desc": "ARC18 Contract providing an interface to create and enforce a royalty policy over a given ASA. See https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0018.md for details.",
"networks": {}
}
```
#### Example Flow for a Marketplace
[Section titled “Example Flow for a Marketplace”](#example-flow-for-a-marketplace)
```plaintext
Let Alice be the creator of the Royalty Enforcer and Royalty Asset
Let Alice also be the Royalty Receiver
Let Bob be the Royalty Asset holder
Let Carol be a buyer of a Royalty Asset
```
```mermaid
sequenceDiagram
Alice->>Royalty Enforcer: set_policy with Royalty Basis and Royalty Receiver
Alice->>Royalty Enforcer: set_payment_asset with any asset that should be accepted as payment
par List
Bob->>Royalty Enforcer: offer
Bob->>Marketplace: list
end
par Buy
Carol->>Marketplace: buy
Marketplace->>Royalty Enforcer: transfer
Bob->>Carol: clawback issued by Royalty Enforcer
Royalty Enforcer->>Alice: royalty payment
end
par Delist
Bob->>Royalty Enforcer: offer 0
Bob->>Marketplace: delist
end
```
### Metadata
[Section titled “Metadata”](#metadata)
The metadata associated with an asset **SHOULD** conform to any ARC that supports an additional field in the `properties` section specifying the information relevant for off-chain applications like wallets or Marketplace dApps. The metadata **MUST** be immutable. The fields that should be specified are the `application-id` as described in [ARC-20](/arc-standards/arc-0020) and `rekey-checked`, which describes whether or not this application implements the rekey checks during transfers. Example:
```js
//...
"properties":{
//...
"arc-20":{
"application-id":123
},
"arc-18":{
"rekey-checked":true // Defaults to false if not set, see *Rekey to swap* below for reasoning
}
}
//...
```
## Rationale
[Section titled “Rationale”](#rationale)
The motivation behind defining a Royalty Enforcement specification is the need to guarantee that a portion of a payment is received by a designated royalty collector on the sale of an asset. Current royalty implementations are either platform specific or are only adhered to when an honest seller complies with them, allowing for the exchange of an asset without necessarily paying the royalties.
The use of a smart contract as a clawback address is a guaranteed way to know an asset transfer is only ever made when certain conditions are met, or made in conjunction with additional transactions. The Royalty Enforcer is responsible for the calculations required in dividing up and dispensing the payments to the respective parties. The present specification does not impose any restriction on the Royalty Receiver's distribution logic (if any), which could be achieved through a Multi Signature account, a Smart Signature, or even through another Smart Contract.
On Ethereum, the EIP-2981 standard allows ERC-721 and ERC-1155 interfaces to signal a royalty amount to be paid; however, this is not enforced and requires marketplaces to implement and adhere to it.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
Existing ASAs with unset clawback address or unset manager address (in case the clawback address is not the application account of a smart contract that is updatable - which is most likely the case) will be incompatible with this specification.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
There are a number of security considerations that implementers and users should be aware of.
*Royalty policy mutability*
The immutability of a royalty basis is important to consider since mutability introduces the possibility for a situation where, after an initial sale, the royalty policy is updated from 1% to 100%, for example. This would make any further sales send the full payment amount to the royalty recipient, and the seller would receive nothing. This specification is written with the recommendation that the royalty policy **SHOULD** be immutable. This is not a **MUST** so that an implementation may allow the royalty basis to decrease over time. Caution should be taken by users and implementers when evaluating how to implement the exact logic.
*Spoofed payment*
While it's possible to enforce the group size limit, it is still possible to circumvent the royalty enforcement logic by making an Inner Transaction application call with the appropriate parameters and a small payment, then placing the "real" payment in the same outer group. The counter-party risk remains the same since the inner transaction is atomic with the outer ones. In addition, it is always possible to circumvent the royalty enforcement logic by using an escrow account in the middle:
* Alice wants to sell asset A to Bob for 1M USDC.
* Alice and Bob create an escrow ESCROW (smart signature).
* Alice sends A to ESCROW for 1 μAlgo.
* Bob sends 1M USDC to ESCROW.
* Then ESCROW sends 1M USDC to Alice and sends A to Bob for 1 μAlgo.
Some ways to prevent a small royalty payment followed by a larger payment in a later transaction of the same group might be by using an `allow` list that is checked against the `auth_addr` of the offer call. The `allow` list would be comprised of known and trusted marketplaces that do not attempt to circumvent the royalty policy. The `allow` list may be implicit as well, by transferring a specific frozen asset to the `auth_addr`; on `offer`, the balance must be > 0 to allow the `auth_addr` to be persisted. The exact logic that should determine *if* a transfer should be allowed is left to the implementer.
*Rekey to swap*
Rekeying an account can also be seen as circumventing this logic, since there is no counter-party risk given that a rekey can be grouped with a payment. We address this by suggesting the `auth_addr` on the buyer and seller accounts are both set to the zero address.
*Offer for unintended clawback*
Because we use the clawback mechanism to move the asset, we need to be sure that the current owner is actually interested in making the sale. We address this by requiring that the [offer](#offer) method is called to set an authorized address OR that the AssetSender is the one making the application call.
*Offer double spend*
If the [offer](#offer) method did not require the current value to be passed, a possible attack or race condition could be taken advantage of:
* There’s an open offer for N.
* The owner decides to lower it to M, where 0 < M < N.
* An attacker sees that and "frontruns" the second transaction, first getting N \[here the ledger applies the change of offer, which overwrites the previous value (now 0) with M], then getting another M of the asset.
*Mutable asset parameters*
If the ASA has its manager parameter set, it is possible to change the other address parameters. Namely, the clawback and freeze roles could be changed to an address that is *not* the Royalty Enforcer's application address. For that reason the manager **MUST** be set to the zero address or to the Royalty Enforcer's address.
*Compatibility of existing ASAs*
In the case of [ARC-69](/arc-standards/arc-0069) and [ARC-19](/arc-standards/arc-0019) ASAs, the manager is the account that may issue `acfg` transactions to update metadata or to change the reserve address. For the purposes of this spec the manager **MUST** be the application address, so the logic to issue appropriate `acfg` transactions should be included in the application logic if there is a need to update them.
> When evaluating whether or not an existing ASA may be compatible with this spec, note that the `clawback` address needs to be set to the application address of the Royalty Enforcer. The `freeze` address and `manager` address may be empty or, if set, must be the application address. If these addresses aren’t set correctly, the royalty enforcer will not be able to issue the transactions required and there may be security considerations. The `reserve` address has no requirements in this spec so [ARC-19](/arc-standards/arc-0019) ASAs should have no issue assuming the rest of the addresses are set correctly.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Templating of NFT ASA URLs for mutability
> Templating mechanism of the URL so that changeable data in an asset can be substituted by a client, providing a mutable URL.
## Abstract
[Section titled “Abstract”](#abstract)
This ARC describes a template substitution for URLs in ASAs, initially for `ipfs://` scheme URLs, allowing mutable CID replacement in rendered URLs.
The proposed template-XXX scheme has substitutions like:
```plaintext
template-ipfs://{ipfscid::::}[/...]
```
This will allow modifying the 32-byte Reserve address of an ASA to represent a new IPFS content-id hash. Changing the reserve address via an asset-config transaction will be all that is needed to point an ASA URL to new IPFS content. The client reading this URL will compose a fully formed IPFS Content-ID based on the version, multicodec, and hash arguments provided in the ipfscid substitution.
## Motivation
[Section titled “Motivation”](#motivation)
While immutability for many NFTs is appropriate (see [ARC-3](/arc-standards/arc-0003)), there are cases where some type of mutability is desired for NFT metadata and/or digital media. The data being referenced by the pointer should be immutable, but the pointer may be updated to provide a kind of mutability. The data being referenced may be of any size.
Algorand ASAs support mutation of several parameters, namely the role address fields (Manager, Clawback, Freeze, and Reserve addresses), unless previously cleared. These are changed via an asset-config transaction from the Manager account. An asset-config transaction may include a note, but it is limited to 1KB and accessing this value requires clients to use an indexer to iterate/retrieve the values.
Of the parameters that are mutable, the Reserve address is somewhat distinct in that it is not used for anything directly as part of the protocol. It is used solely for determining what is in/out of circulation (by subtracting supply from that held by the reserve address). With a (pure) NFT, the Reserve address is irrelevant as it is a 1 of 1 unit. Thus, the Reserve address may be repurposed as a 32-byte ‘bitbucket’.
These 32 bytes can, for example, hold a SHA2-256 hash uniquely referencing the desired content for the ASA (ARC-3-like metadata, for example).
Using the reserve address in this way means that what an ASA ‘points to’ for metadata can be changed with a single asset config transaction, changing only the 32-bytes of the reserve address. The new value is accessible via even non-archival nodes with a single call to the `/v2/assets/xxx` REST endpoint.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
This proposal specifies a method to provide mutability for IPFS hosted content-ids. The intention is that FUTURE ARCs could define additional template substitutions, but this is not meant to be a kitchen sink of templates, only to establish a possible baseline of syntax.
An indication that this ARC is in use is defined by an ASA URL’s “scheme” having the prefix “**template-**”.
An asset conforming to this specification **MUST** have:
1. **URL Scheme of “template-ipfs”**
The URL of the asset must be of the form:
```plain
template-ipfs://(...)
```
> The ipfs\:// scheme is already somewhat of a meta scheme in that clients interpret the ipfs scheme as referencing an IPFS CID (version 0/base58 or 1/base32 currently) followed by an optional path within certain types of IPFS DAG content (IPLD CAR content, for example). Clients take the CID and use it to fetch content directly from the IPFS network via IPFS nodes, or via various IPFS gateways (e.g. `https://ipfs.io/ipfs/CID[/...]`, Pinata, etc.).
2. **An “ipfscid” *template* argument in place of the normal CID.**
Where the format of a template is `{<template name>:<param 1>:<param 2>:...}`.
The ipfscid template definition is based on properties within the IPFS CID spec:
```plaintext
{ipfscid:<version>:<multicodec>:<field name>:<hash type>}
```
> The intent is to recompose a complete CID based on the content-hash contained within the 32-byte reserve address, but using the correct multicodec content type, ipfs content-id version, and hash type to match how the asset creator will seed the IPFS content. If a single file is added using the ‘ipfs’ CLI via `ipfs add --cid-version=1 metadata.json` then the resulting content will be encoded using the ‘raw’ multicodec type. If a directory is added containing one or more files, then it will be encoded using the dag-pb multicodec. CAR content will also be dag-pb. Thus based on the method used to post content to IPFS, the ipfscid template should match.
The parameters to the template ipfscid are:
1. IPFS `<version>` **MUST** be a valid IPFS CID version. Client implementations **MUST** support ‘0’ or ‘1’ and **SHOULD** support future versions.
2. `<multicodec>` **MUST** be an IPFS multicodec name. Client implementations **MUST** support ‘raw’ or ‘dag-pb’. Other codecs **SHOULD** be supported but are beyond the scope of this proposal.
3. `<field name>` **MUST** be ‘reserve’.
> This is to represent that the reserve address is used for the 32-byte hash. It is specified here so future iterations of the specification may allow other fields or syntaxes to reference other mutable field types.
4. `<hash type>` **MUST** be the multihash hash function type (as defined in the multiformats multihash table). Client implementations **MUST** support ‘sha2-256’ and **SHOULD** support future hash types when introduced by IPFS.
> IPFS may add future versions of the cid spec, and add additional multicodec types or hash types.
Implementations **SHOULD** use IPFS libraries where possible that accept multicodec and hash types as named values and allow a CID to be composed generically.
### Examples
[Section titled “Examples”](#examples)
> This whole section is non-normative.
* ASA URL: `template-ipfs://{ipfscid:0:dag-pb:reserve:sha2-256}/arc3.json`
* ASA URL: `template-ipfs://{ipfscid:1:raw:reserve:sha2-256}`
* ASA URL: `template-ipfs://{ipfscid:1:dag-pb:reserve:sha2-256}/metadata.json`
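As a non-normative illustration, the template parameters in the examples above can be extracted with a short Go sketch using only the standard library. The helper name `ParseTemplateParams` and the anchored regular expression are illustrative, not part of the specification:

```go
package main

import (
	"fmt"
	"regexp"
)

// ipfscidRegexp mirrors the template syntax shown in the examples above:
// {ipfscid:<version>:<multicodec>:<field name>:<hash type>} with an optional path.
var ipfscidRegexp = regexp.MustCompile(
	`^template-ipfs://{ipfscid:(?P<version>[01]):(?P<codec>[a-z0-9\-]+):(?P<field>[a-z0-9\-]+):(?P<hash>[a-z0-9\-]+)}(?P<path>/.*)?$`)

// ParseTemplateParams extracts the ipfscid parameters from an ASA URL,
// returning them keyed by name, plus whether the URL matched at all.
func ParseTemplateParams(asaURL string) (map[string]string, bool) {
	m := ipfscidRegexp.FindStringSubmatch(asaURL)
	if m == nil {
		return nil, false
	}
	params := map[string]string{}
	for i, name := range ipfscidRegexp.SubexpNames() {
		if name != "" {
			params[name] = m[i]
		}
	}
	return params, true
}

func main() {
	p, ok := ParseTemplateParams("template-ipfs://{ipfscid:0:dag-pb:reserve:sha2-256}/arc3.json")
	fmt.Println(ok, p["version"], p["codec"], p["field"], p["hash"], p["path"])
}
```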
#### Deployed Testnet Example
[Section titled “Deployed Testnet Example”](#deployed-testnet-example)
An example was pushed to TestNet, converting from an existing ARC-3 MainNet ASA (asset ID 560421434).
With IPFS URL:
```plaintext
ipfs://QmQZyq4b89RfaUw8GESPd2re4hJqB8bnm4kVHNtyQrHnnK
```
The TestNet ASA was minted with the URL:
```plaintext
template-ipfs://{ipfscid:0:dag-pb:reserve:sha2-256}
```
as the original CID is a V0 / dag-pb CID.
A CID inspector tool is helpful for ‘visualizing’ CIDs, including this specific one.
Using the example encoding implementation results in a virtual ‘reserve address’ of
```plaintext
EEQYWGGBHRDAMTEVDPVOSDVX3HJQIG6K6IVNR3RXHYOHV64ZWAEISS4CTI
```
which is the address (with checksum) corresponding to the 32-byte value with hexadecimal representation:
```plaintext
21218B18C13C46064C951BEAE90EB7D9D3041BCAF22AD8EE373E1C7AFB99B008
```
(The transformation from a 32-byte public key to an address is described on the Algorand developer website.)
The resulting ASA can be viewed on a TestNet explorer.
Using the forked [repo](https://github.com/TxnLab/arc3.xyz), with TestNet selected and the /nft/66753108 URL, the browser will display the original content as-is, using only the Reserve address as the source of the content hash.
### Interactions with ARC-3
[Section titled “Interactions with ARC-3”](#interactions-with-arc-3)
This ARC is compatible with [ARC-3](/arc-standards/arc-0003) with the following notable exception: the ASA Metadata Hash (`am`) is no longer necessarily a valid hash of the JSON Metadata File pointed to by the URL.
As such, clients cannot be strictly compliant with both ARC-3 and [ARC-19](/arc-standards/arc-0019). A client supporting both ARC-3 and ARC-19 **SHOULD** skip validation of the ASA Metadata Hash when the Asset URL follows ARC-19.
ARC-3 clients **SHOULD** clearly indicate to the user when displaying an ARC-19 ASA, as, contrary to a strict ARC-3 ASA, the asset may arbitrarily change over time (even after being bought).
ASAs that follow both ARC-3 and ARC-19 **MUST NOT** use the extra metadata hash (from ARC-3).
## Rationale
[Section titled “Rationale”](#rationale)
See the motivation section above for the general rationale.
### Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
The ‘template-’ prefix of the scheme is intended to break clients that read these ASA URLs naively. Clients interpreting these URLs as-is will likely fail with unusual errors. Code checking for an explicit ‘ipfs’ scheme, for example, will not see this as compatible with any of its default processing and should treat the URL as if it were simply unknown/empty.
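A pre-ARC-19 client’s classification logic might look like the following non-normative Go sketch (the helper name is illustrative), where a template URL simply falls through to the unknown case:

```go
package main

import (
	"fmt"
	"strings"
)

// asaURLScheme classifies an ASA URL for a client that does not implement
// this ARC: "template-" prefixed schemes are deliberately unrecognized and
// end up treated the same as an unknown or empty URL.
func asaURLScheme(asaURL string) string {
	switch {
	case strings.HasPrefix(asaURL, "ipfs://"):
		return "ipfs"
	case strings.HasPrefix(asaURL, "https://"):
		return "https"
	default:
		// "template-ipfs://..." lands here by design.
		return "unknown"
	}
}

func main() {
	fmt.Println(asaURLScheme("template-ipfs://{ipfscid:1:raw:reserve:sha2-256}"))
}
```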
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
### Encoding
[Section titled “Encoding”](#encoding)
#### Go implementation
[Section titled “Go implementation”](#go-implementation)
```go
import (
	"fmt"

	"github.com/algorand/go-algorand-sdk/types"
	"github.com/ipfs/go-cid"
	"github.com/multiformats/go-multihash"
)

// ...
func ReserveAddressFromCID(cidToEncode cid.Cid) (string, error) {
	decodedMultiHash, err := multihash.Decode(cidToEncode.Hash())
	if err != nil {
		return "", fmt.Errorf("failed to decode ipfs cid: %w", err)
	}
	// The multihash digest is the raw 32-byte content hash; encode it as an
	// Algorand address to store it in the ASA reserve address field.
	return types.EncodeAddress(decodedMultiHash.Digest)
}
// ....
```
### Decoding
[Section titled “Decoding”](#decoding)
#### Go implementation
[Section titled “Go implementation”](#go-implementation-1)
```go
import (
	"errors"
	"fmt"
	"regexp"
	"strings"

	"github.com/algorand/go-algorand-sdk/types"
	"github.com/ipfs/go-cid"
	"github.com/multiformats/go-multicodec"
	"github.com/multiformats/go-multihash"
)

var (
	ErrUnknownSpec      = errors.New("unsupported template-ipfs spec")
	ErrUnsupportedField = errors.New("unsupported ipfscid field, only reserve is currently supported")
	ErrUnsupportedCodec = errors.New("unknown multicodec type in ipfscid spec")
	ErrUnsupportedHash  = errors.New("unknown hash type in ipfscid spec")
	ErrInvalidV0        = errors.New("cid v0 must always be dag-pb and sha2-256 codec/hash type")
	ErrHashEncoding     = errors.New("error encoding new hash")

	templateIPFSRegexp = regexp.MustCompile(`template-ipfs://{ipfscid:(?P<version>[01]):(?P<codec>[a-z0-9\-]+):(?P<field>[a-z0-9\-]+):(?P<hash>[a-z0-9\-]+)}`)
)

func ParseASAUrl(asaUrl string, reserveAddress types.Address) (string, error) {
	matches := templateIPFSRegexp.FindStringSubmatch(asaUrl)
	if matches == nil {
		if strings.HasPrefix(asaUrl, "template-ipfs://") {
			return "", ErrUnknownSpec
		}
		// Not a template URL: return it unchanged.
		return asaUrl, nil
	}
	if matches[templateIPFSRegexp.SubexpIndex("field")] != "reserve" {
		return "", ErrUnsupportedField
	}
	var (
		codec         multicodec.Code
		multihashType uint64
		hash          []byte
		err           error
		cidResult     cid.Cid
	)
	if err = codec.Set(matches[templateIPFSRegexp.SubexpIndex("codec")]); err != nil {
		return "", ErrUnsupportedCodec
	}
	multihashType = multihash.Names[matches[templateIPFSRegexp.SubexpIndex("hash")]]
	if multihashType == 0 {
		return "", ErrUnsupportedHash
	}
	// Wrap the 32 bytes of the reserve address as a multihash digest.
	hash, err = multihash.Encode(reserveAddress[:], multihashType)
	if err != nil {
		return "", ErrHashEncoding
	}
	if matches[templateIPFSRegexp.SubexpIndex("version")] == "0" {
		if codec != multicodec.DagPb {
			return "", ErrInvalidV0
		}
		if multihashType != multihash.SHA2_256 {
			return "", ErrInvalidV0
		}
		cidResult = cid.NewCidV0(hash)
	} else {
		cidResult = cid.NewCidV1(uint64(codec), hash)
	}
	return fmt.Sprintf("ipfs://%s", strings.ReplaceAll(asaUrl, matches[0], cidResult.String())), nil
}
```
#### Typescript Implementation
[Section titled “Typescript Implementation”](#typescript-implementation)
A modified version of a simple ARC-3 viewer can be found [here](https://github.com/TxnLab/arc3.xyz); see specifically the code segment at [nft.ts#L41](https://github.com/TxnLab/arc3.xyz/blob/main/src/lib/nft.ts#L41).
This is a fork of [arc3.xyz](https://github.com/barnjamin/arc3.xyz).
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
There should be no specific security issues beyond those of any client accessing any remote content and the risks linked to assets changing (even after the ASA is bought).
The latter is handled in the section “Interactions with ARC-3” above.
Regarding the former, URLs within ASAs could point to malicious content, whether that is an http/https link or whether fetched through ipfs protocols or ipfs gateways. As the template changes nothing other than the resulting URL and also defines nothing more than the generation of an IPFS CID hash value, no security concerns derived from this specific proposal are known.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Smart ASA
> An ARC for an ASA controlled by an Algorand Smart Contract
## Abstract
[Section titled “Abstract”](#abstract)
A “Smart ASA” is an Algorand Standard Asset (ASA) controlled by a Smart Contract that exposes methods to create, configure, transfer, freeze, and destroy the asset.
This ARC defines the ABI interface of such a Smart Contract, the required metadata, and suggests a reference implementation.
## Motivation
[Section titled “Motivation”](#motivation)
The Algorand Standard Asset (ASA) is an excellent building block for on-chain applications. It is battle-tested and widely supported by SDKs, wallets, and dApps.
However, the ASA lacks flexibility and configurability. For instance, once issued, it can’t be re-configured (its unit name, decimals, maximum supply). Also, it is freely transferable (unless frozen). This prevents developers from specifying additional business logic to be checked while transferring it (think of royalties or vesting).
Enforcing transfer conditions requires freezing the asset and transferring it through a clawback operation — a process that is opaque to users and wallets and makes for a poor user experience.
The Smart ASA defined by this ARC extends the ASA to increase its expressiveness and its flexibility. By introducing this as a standard, both developers, users (marketplaces, wallets, dApps, etc.) and SDKs can confidently and consistently recognize Smart ASAs and adjust their flows and user experiences accordingly.
## Specification
[Section titled “Specification”](#specification)
The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
The following sections describe:
* The ABI interface for a controlling Smart Contract (the Smart Contract that controls a Smart ASA).
* The metadata required to denote a Smart ASA and define the association between an ASA and its controlling Smart Contract.
### ABI Interface
[Section titled “ABI Interface”](#abi-interface)
The ABI interface specified here draws inspiration from the transaction reference of an Algorand Standard Asset (ASA).
To provide a unified and familiar interface between the Algorand Standard Asset and the Smart ASA, method names and parameters have been adapted to the ABI types but left otherwise unchanged.
#### Asset Creation
[Section titled “Asset Creation”](#asset-creation)
```json
{
"name": "asset_create",
"args": [
{ "type": "uint64", "name": "total" },
{ "type": "uint32", "name": "decimals" },
{ "type": "bool", "name": "default_frozen" },
{ "type": "string", "name": "unit_name" },
{ "type": "string", "name": "name" },
{ "type": "string", "name": "url" },
{ "type": "byte[]", "name": "metadata_hash" },
{ "type": "address", "name": "manager_addr" },
{ "type": "address", "name": "reserve_addr" },
{ "type": "address", "name": "freeze_addr" },
{ "type": "address", "name": "clawback_addr" }
],
"returns": { "type": "uint64" }
}
```
Calling `asset_create` creates a new Smart ASA and returns the identifier of the ASA. The [metadata section](#metadata) describes its required properties.
> Upon a call to `asset_create`, a reference implementation SHOULD:
>
> * Mint an Algorand Standard Asset (ASA) that MUST specify the properties defined in the [metadata section](#metadata). In addition:
>
> * The `manager`, `reserve` and `freeze` addresses SHOULD be set to the account of the controlling Smart Contract.
> * The remaining fields are left to the implementation, which MAY set `total` to `2 ** 64 - 1` to enable dynamically increasing the max circulating supply of the Smart ASA.
> * `name` and `unit_name` MAY be set to `SMART-ASA` and `S-ASA`, to denote that this ASA is Smart and has a controlling application.
>
> * Persist the `total`, `decimals`, `default_frozen`, etc. fields for later use/retrieval.
>
> * Return the ID of the created ASA.
>
> It is RECOMMENDED for calls to this method to be permissioned, e.g. to only approve transactions issued by the controlling Smart Contract creator.
#### Asset Configuration
[Section titled “Asset Configuration”](#asset-configuration)
```json
[
{
"name": "asset_config",
"args": [
{ "type": "asset", "name": "config_asset" },
{ "type": "uint64", "name": "total" },
{ "type": "uint32", "name": "decimals" },
{ "type": "bool", "name": "default_frozen" },
{ "type": "string", "name": "unit_name" },
{ "type": "string", "name": "name" },
{ "type": "string", "name": "url" },
{ "type": "byte[]", "name": "metadata_hash" },
{ "type": "address", "name": "manager_addr" },
{ "type": "address", "name": "reserve_addr" },
{ "type": "address", "name": "freeze_addr" },
{ "type": "address", "name": "clawback_addr" }
],
"returns": { "type": "void" }
},
{
"name": "get_asset_config",
"readonly": true,
"args": [{ "type": "asset", "name": "asset" }],
"returns": {
"type": "(uint64,uint32,bool,string,string,string,byte[],address,address,address,address)",
"desc": "`total`, `decimals`, `default_frozen`, `unit_name`, `name`, `url`, `metadata_hash`, `manager_addr`, `reserve_addr`, `freeze_addr`, `clawback_addr`"
}
}
]
```
Calling `asset_config` configures an existing Smart ASA.
> Upon a call to `asset_config`, a reference implementation SHOULD:
>
> * Fail if `config_asset` does not correspond to an ASA controlled by this smart contract.
> * Succeed iff the `sender` of the transaction corresponds to the `manager_addr` that was previously persisted for `config_asset` by a previous call to this method or, if this method was never called, by `asset_create`.
> * Update the persisted `total`, `decimals`, `default_frozen`, etc. fields for later use/retrieval.
>
> The business logic associated with the update of the other parameters is left to the implementation. An implementation that maximizes similarities with ASAs SHOULD NOT allow modifying the `clawback_addr` or `freeze_addr` after they have been set to the special value `ZeroAddress`.
>
> The implementation MAY provide flexibility on the fields of an ASA that cannot be updated after initial configuration. For instance, it MAY update the `total` parameter to enable minting of new units or restricting the maximum supply; when doing so, the implementation SHOULD ensure that the updated `total` is not lower than the current circulating supply of the asset.
Calling `get_asset_config` reads and returns the `asset`’s configuration as specified in:
* The most recent invocation of `asset_config`; or
* if `asset_config` was never invoked for `asset`, the invocation of `asset_create` that originally created it.
> Upon a call to `get_asset_config`, a reference implementation SHOULD:
>
> * Fail if `asset` does not correspond to an ASA controlled by this smart contract (see `asset_config`).
> * Return `total`, `decimals`, `default_frozen`, `unit_name`, `name`, `url`, `metadata_hash`, `manager_addr`, `reserve_addr`, `freeze_addr`, `clawback_addr` as persisted by `asset_create` or `asset_config`.
#### Asset Transfer
[Section titled “Asset Transfer”](#asset-transfer)
```json
{
"name": "asset_transfer",
"args": [
{ "type": "asset", "name": "xfer_asset" },
{ "type": "uint64", "name": "asset_amount" },
{ "type": "account", "name": "asset_sender" },
{ "type": "account", "name": "asset_receiver" }
],
"returns": { "type": "void" }
}
```
Calling `asset_transfer` transfers a Smart ASA.
> Upon a call to `asset_transfer`, a reference implementation SHOULD:
>
> * Fail if `xfer_asset` does not correspond to an ASA controlled by this smart contract.
>
> * Succeed if:
>
> * the `sender` of the transaction is the `asset_sender` and
> * `xfer_asset` is not in a frozen state (see [Asset Freeze below](#asset-freeze)) and
> * `asset_sender` and `asset_receiver` are not in a frozen state (see [Asset Freeze below](#asset-freeze))
>
> * Succeed if the `sender` of the transaction corresponds to the `clawback_addr`, as persisted by the controlling Smart Contract. This enables clawback operations on the Smart ASA.
>
> Internally, the controlling Smart Contract SHOULD issue a clawback inner transaction that transfers the `asset_amount` from `asset_sender` to `asset_receiver`. The inner transaction will fail on the usual conditions (e.g. not enough balance).
>
> Note that the method interface does not specify `asset_close_to`, because holders of a Smart ASA will need two transactions (RECOMMENDED in an Atomic Transfer) to close their position:
>
> * A call to this method to transfer their outstanding balance (possibly as a `CloseOut` operation if the controlling Smart Contract required opt in); and
> * an additional transaction to close out of the ASA.
#### Asset Freeze
[Section titled “Asset Freeze”](#asset-freeze)
```json
[
{
"name": "asset_freeze",
"args": [
{ "type": "asset", "name": "freeze_asset" },
{ "type": "bool", "name": "asset_frozen" }
],
"returns": { "type": "void" }
},
{
"name": "account_freeze",
"args": [
{ "type": "asset", "name": "freeze_asset" },
{ "type": "account", "name": "freeze_account" },
{ "type": "bool", "name": "asset_frozen" }
],
"returns": { "type": "void" }
}
]
```
Calling `asset_freeze` prevents any transfer of a Smart ASA. Calling `account_freeze` prevents a specific account from transferring or receiving a Smart ASA.
> Upon a call to `asset_freeze` or `account_freeze`, a reference implementation SHOULD:
>
> * Fail if `freeze_asset` does not correspond to an ASA controlled by this smart contract.
> * Succeed iff the `sender` of the transaction corresponds to the `freeze_addr`, as persisted by the controlling Smart Contract.
>
> In addition:
>
> * Upon a call to `asset_freeze`, the controlling Smart Contract SHOULD persist the tuple `(freeze_asset, asset_frozen)` (for instance, by setting a `frozen` flag in *global* storage).
> * Upon a call to `account_freeze` the controlling Smart Contract SHOULD persist the tuple `(freeze_asset, freeze_account, asset_frozen)` (for instance by setting a `frozen` flag in the *local* storage of the `freeze_account`). See the [security considerations section](#security-considerations) for how to ensure that Smart ASA holders cannot reset their `frozen` flag by clearing out their state at the controlling Smart Contract.
```json
[
{
"name": "get_asset_is_frozen",
"readonly": true,
"args": [{ "type": "asset", "name": "freeze_asset" }],
"returns": { "type": "bool" }
},
{
"name": "get_account_is_frozen",
"readonly": true,
"args": [
{ "type": "asset", "name": "freeze_asset" },
{ "type": "account", "name": "freeze_account" }
],
"returns": { "type": "bool" }
}
]
```
The value returned by `get_asset_is_frozen` (respectively, `get_account_is_frozen`) tells whether the asset is frozen for all accounts (respectively, for `freeze_account`). A `true` value indicates that transfers will be rejected.
> Upon a call to `get_asset_is_frozen`, a reference implementation SHOULD retrieve the tuple `(freeze_asset, asset_frozen)` as stored on `asset_freeze` and return the value corresponding to `asset_frozen`. Upon a call to `get_account_is_frozen`, a reference implementation SHOULD retrieve the tuple `(freeze_asset, freeze_account, asset_frozen)` as stored on `account_freeze` and return the value corresponding to `asset_frozen`.
#### Asset Destroy
[Section titled “Asset Destroy”](#asset-destroy)
```json
{
"name": "asset_destroy",
"args": [{ "type": "asset", "name": "destroy_asset" }],
"returns": { "type": "void" }
}
```
Calling `asset_destroy` destroys a Smart ASA.
> Upon a call to `asset_destroy`, a reference implementation SHOULD:
>
> * Fail if `destroy_asset` does not correspond to an ASA controlled by this smart contract.
>
> It is RECOMMENDED for calls to this method to be permissioned (see `asset_create`).
>
> The controlling Smart Contract SHOULD perform an asset destroy operation on the ASA with ID `destroy_asset`. The operation will fail if the asset is still in circulation.
#### Circulating Supply
[Section titled “Circulating Supply”](#circulating-supply)
```json
{
"name": "get_circulating_supply",
"readonly": true,
"args": [{ "type": "asset", "name": "asset" }],
"returns": { "type": "uint64" }
}
```
Calling `get_circulating_supply` returns the circulating supply of a Smart ASA.
> Upon a call to `get_circulating_supply`, a reference implementation SHOULD:
>
> * Fail if `asset` does not correspond to an ASA controlled by this smart contract.
> * Return the circulating supply of `asset`, defined by the difference between the ASA `total` and the balance held by its `reserve_addr` (see [Asset Creation](#asset-creation)).
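This definition amounts to a single subtraction; as a non-normative sketch (the function name is illustrative):

```go
package main

import "fmt"

// CirculatingSupply computes the Smart ASA circulating supply as defined
// above: the ASA `total` minus the balance held by the reserve address.
func CirculatingSupply(total, reserveBalance uint64) uint64 {
	if reserveBalance > total {
		// Defensive: should not happen for a well-configured Smart ASA.
		return 0
	}
	return total - reserveBalance
}

func main() {
	// e.g. total of 1000 units, 400 of which are still held by the reserve
	fmt.Println(CirculatingSupply(1000, 400))
}
```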
#### Full ABI Spec
[Section titled “Full ABI Spec”](#full-abi-spec)
```json
{
"name": "arc-0020",
"methods": [
{
"name": "asset_create",
"args": [
{
"type": "uint64",
"name": "total"
},
{
"type": "uint32",
"name": "decimals"
},
{
"type": "bool",
"name": "default_frozen"
},
{
"type": "string",
"name": "unit_name"
},
{
"type": "string",
"name": "name"
},
{
"type": "string",
"name": "url"
},
{
"type": "byte[]",
"name": "metadata_hash"
},
{
"type": "address",
"name": "manager_addr"
},
{
"type": "address",
"name": "reserve_addr"
},
{
"type": "address",
"name": "freeze_addr"
},
{
"type": "address",
"name": "clawback_addr"
}
],
"returns": {
"type": "uint64"
}
},
{
"name": "asset_config",
"args": [
{
"type": "asset",
"name": "config_asset"
},
{
"type": "uint64",
"name": "total"
},
{
"type": "uint32",
"name": "decimals"
},
{
"type": "bool",
"name": "default_frozen"
},
{
"type": "string",
"name": "unit_name"
},
{
"type": "string",
"name": "name"
},
{
"type": "string",
"name": "url"
},
{
"type": "byte[]",
"name": "metadata_hash"
},
{
"type": "address",
"name": "manager_addr"
},
{
"type": "address",
"name": "reserve_addr"
},
{
"type": "address",
"name": "freeze_addr"
},
{
"type": "address",
"name": "clawback_addr"
}
],
"returns": {
"type": "void"
}
},
{
"name": "get_asset_config",
"readonly": true,
"args": [
{
"type": "asset",
"name": "asset"
}
],
"returns": {
"type": "(uint64,uint32,bool,string,string,string,byte[],address,address,address,address)",
"desc": "`total`, `decimals`, `default_frozen`, `unit_name`, `name`, `url`, `metadata_hash`, `manager_addr`, `reserve_addr`, `freeze_addr`, `clawback`"
}
},
{
"name": "asset_transfer",
"args": [
{
"type": "asset",
"name": "xfer_asset"
},
{
"type": "uint64",
"name": "asset_amount"
},
{
"type": "account",
"name": "asset_sender"
},
{
"type": "account",
"name": "asset_receiver"
}
],
"returns": {
"type": "void"
}
},
{
"name": "asset_freeze",
"args": [
{
"type": "asset",
"name": "freeze_asset"
},
{
"type": "bool",
"name": "asset_frozen"
}
],
"returns": {
"type": "void"
}
},
{
"name": "account_freeze",
"args": [
{
"type": "asset",
"name": "freeze_asset"
},
{
"type": "account",
"name": "freeze_account"
},
{
"type": "bool",
"name": "asset_frozen"
}
],
"returns": {
"type": "void"
}
},
{
"name": "get_asset_is_frozen",
"readonly": true,
"args": [
{
"type": "asset",
"name": "freeze_asset"
}
],
"returns": {
"type": "bool"
}
},
{
"name": "get_account_is_frozen",
"readonly": true,
"args": [
{
"type": "asset",
"name": "freeze_asset"
},
{
"type": "account",
"name": "freeze_account"
}
],
"returns": {
"type": "bool"
}
},
{
"name": "asset_destroy",
"args": [
{
"type": "asset",
"name": "destroy_asset"
}
],
"returns": {
"type": "void"
}
},
{
"name": "get_circulating_supply",
"readonly": true,
"args": [
{
"type": "asset",
"name": "asset"
}
],
"returns": {
"type": "uint64"
}
}
]
}
```
### Metadata
[Section titled “Metadata”](#metadata)
#### ASA Metadata
[Section titled “ASA Metadata”](#asa-metadata)
The ASA underlying a Smart ASA:
* MUST be `DefaultFrozen`.
* MUST specify the ID of the controlling Smart Contract (see below); and
* MUST set the `ClawbackAddr` to the account of such Smart Contract.
The metadata **MUST** be immutable.
#### Specifying the controlling Smart Contract
[Section titled “Specifying the controlling Smart Contract”](#specifying-the-controlling-smart-contract)
A Smart ASA MUST specify the ID of its controlling Smart Contract.
If the Smart ASA also conforms to any ARC that supports additional `properties` ([ARC-3](/arc-standards/arc-0003), [ARC-69](/arc-standards/arc-0069)), then it MUST include an `arc-20` key and set the corresponding value to a map, including the ID of the controlling Smart Contract as the value for the key `application-id`. For example:
```javascript
{
//...
"properties": {
//...
"arc-20": {
"application-id": 123
}
}
//...
}
```
> To avoid ecosystem fragmentation this ARC does NOT propose any new method to specify the metadata of an ASA. Instead, it only extends already existing standards.
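As a non-normative sketch, a client could locate the controlling application ID in such metadata as follows; the helper name `controllingAppID` is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// controllingAppID extracts the ARC-20 controlling application ID from
// ARC-3/ARC-69 style metadata, returning ok=false when it is absent.
func controllingAppID(metadata []byte) (uint64, bool) {
	var doc struct {
		Properties map[string]json.RawMessage `json:"properties"`
	}
	if err := json.Unmarshal(metadata, &doc); err != nil {
		return 0, false
	}
	raw, found := doc.Properties["arc-20"]
	if !found {
		return 0, false
	}
	var arc20 struct {
		ApplicationID uint64 `json:"application-id"`
	}
	if err := json.Unmarshal(raw, &arc20); err != nil {
		return 0, false
	}
	return arc20.ApplicationID, true
}

func main() {
	meta := []byte(`{"properties":{"arc-20":{"application-id":123}}}`)
	id, ok := controllingAppID(meta)
	fmt.Println(id, ok)
}
```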
### Handling opt in and close out
[Section titled “Handling opt in and close out”](#handling-opt-in-and-close-out)
A Smart ASA MUST require users to opt in to the ASA and MAY require them to opt in to the controlling Smart Contract. These MAY be performed at two separate times.
The remainder of this section is non-normative.
> Smart ASAs SHOULD NOT require users to opt in to the controlling Smart Contract, unless the implementation requires storing information into their local schema (for instance, to implement [freezing](#asset-freeze); also see [security considerations](#security-considerations)).
>
> Clients MAY inspect the local state schema of the controlling Smart Contract to infer whether opt in is required.
>
> If a Smart ASA requires opt in, then clients SHOULD prevent users from closing out the controlling Smart Contract unless they don’t hold a balance for any of the ASAs controlled by the Smart Contract.
## Rationale
[Section titled “Rationale”](#rationale)
This ARC builds on the strengths of the ASA, enabling a Smart Contract to control its operations and flexibly update its configuration.
The rationale is to have a “Smart ASA” that is as widely adopted as the ASA both by the community and by the surrounding ecosystem. Wallets, dApps, and marketplaces:
* Will display a user’s Smart ASA balance out-of-the-box (because of the underlying ASA).
* SHOULD recognize Smart ASAs and inform the users accordingly by displaying the name, unit name, URL, etc. from the controlling Smart Contract.
* SHOULD enable users to transfer the Smart ASA by constructing the appropriate transactions, which call the ABI methods of the controlling Smart Contract.
With this in mind, this standard optimizes for:
* Community adoption, by minimizing the [ASA metadata](#metadata) that needs to be set and the requirements of a conforming implementation.
* Developer adoption, by re-using the familiar ASA transaction reference in the methods’ specification.
* Ecosystem integration, by minimizing the amount of work that a wallet, dApp or service should perform to support the Smart ASA.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
Existing ASAs MAY adopt this standard if issued or re-configured to match the requirements in the [metadata section](#metadata).
This requires:
* The ASA to be `DefaultFrozen`.
* Deploying a Smart Contract that will manage, control and operate on the asset(s).
* Re-configuring the ASA, by setting its `ClawbackAddr` to the account of the controlling Smart Contract.
* Associating the ID of the Smart Contract to the ASA (see [metadata](#metadata)).
### [ARC-18](/arc-standards/arc-0018)
[Section titled “ARC-18”](#arc-18)
Assets implementing [ARC-18](/arc-standards/arc-0018) MAY also be compatible with this ARC if the Smart Contract implementing royalty enforcement exposes the ABI methods specified here. In that case, the corresponding ASAs and their metadata are compliant with this standard.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
A reference implementation is available [here](https://github.com/algorandfoundation/ARCs/tree/main/assets/arc-0020)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Keep in mind that the rules governing a Smart ASA are only in place as long as:
* The ASA remains frozen;
* the `ClawbackAddr` of the ASA is set to a controlling Smart Contract, as specified in the [metadata section](#metadata);
* the controlling Smart Contract is not updatable, nor deletable, nor re-keyable.
### Local State
[Section titled “Local State”](#local-state)
If your controlling Smart Contract implementation writes information to a user’s local state, keep in mind that users can close out of the application and (worse) clear their local state at any time. This requires careful consideration.
For instance, if you determine a user’s [freeze](#asset-freeze) state by reading a flag from their local state, you should consider the flag *set* and the user *frozen* if the corresponding local state key is *missing*.
For a `default_frozen` Smart ASA this means:
* Set the `frozen` flag (to `1`) at opt in.
* Explicitly verify that a user’s `frozen` flag is not set (is `0`) before approving transfers.
* If the key `frozen` is missing from the user’s local state, then consider the flag to be set and reject all transfers.
This prevents users from resetting their `frozen` flag by clearing their state and then opting into the controlling Smart Contract again.
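The rule above can be sketched as plain logic; `transfer_allowed` and the `frozen` key are illustrative names, not part of the standard:

```python
from typing import Optional

def transfer_allowed(local_state: Optional[dict]) -> bool:
    # Hypothetical helper: may a transfer of a default_frozen Smart ASA
    # proceed, given the sender's local state (None if the account
    # cleared its state or never opted in)?
    if local_state is None or "frozen" not in local_state:
        # Missing key: treat the account as frozen, since the user may
        # have cleared state to try to reset the flag.
        return False
    return local_state["frozen"] == 0

assert transfer_allowed({"frozen": 0})      # explicitly not frozen
assert not transfer_allowed({"frozen": 1})  # explicitly frozen
assert not transfer_allowed({})             # key missing -> frozen
assert not transfer_allowed(None)           # state cleared -> frozen
```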
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Round based datafeed oracles on Algorand
> Conventions for building round based datafeed oracles on Algorand
## Abstract
[Section titled “Abstract”](#abstract)
The following document introduces conventions for building round based datafeed oracles on Algorand using the ABI defined in [ARC-4](/arc-standards/arc-0004)
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
An [ARC-21](/arc-standards/arc-0021) oracle **MUST** have an associated smart-contract implementing the ABI interface described below.
### ABI Interface
[Section titled “ABI Interface”](#abi-interface)
Round based datafeed oracles allow smart-contracts to get data relevant to a specific round, for example the ALGO price at that round.
The associated smart contract **MUST** implement the following ABI interface:
```json
{
"name": "ARC_0021",
"desc": "Interface for a round based datafeed oracle",
"methods": [
{
"name": "get",
"desc": "Get data from the oracle for a specific round",
"args": [
{ "type": "uint64", "name": "round", "desc": "The desired round" },
{ "type": "byte[]", "name": "user_data", "desc": "Optional: Extra data provided by the user. Pass an empty slice if not used." }
],
"returns": { "type": "byte[]", "desc": "The oracle's response. If the data doesn't exist, the response is an empty slice." }
},
{
"name": "must_get",
"desc": "Get data from the oracle for a specific round. Panics if the data doesn't exist.",
"args": [
{ "type": "uint64", "name": "round", "desc": "The desired round" },
{ "type": "byte[]", "name": "user_data", "desc": "Optional: Extra data provided by the user. Pass an empty slice if not used." }
],
"returns": { "type": "byte[]", "desc": "The oracle's response" }
},
/** Optional */
{
"name": "get_closest",
"desc": "Get data from the oracle closest to a specified round by searching over past rounds.",
"args": [
{ "type": "uint64", "name": "round", "desc": "The desired round" },
{ "type": "uint64", "name": "search_span", "desc": "Threshold for number of rounds in the past to search on." },
{ "type": "byte[]", "name": "user_data", "desc": "Optional: Extra data provided by the user. Pass an empty slice if not used." }
],
"returns": { "type": "(uint64,byte[])", "desc": "The closest round and the oracle's response for that round. If the data doesn't exist, the round is set to 0 and the response is an empty slice." }
},
/** Optional */
{
"name": "must_get_closest",
"desc": "Get data from the oracle closest to a specified round by searching over past rounds. Panics if no data is found within the specified range.",
"args": [
{ "type": "uint64", "name": "round", "desc": "The desired round" },
{ "type": "uint64", "name": "search_span", "desc": "Threshold for number of rounds in the past to search on." },
{ "type": "byte[]", "name": "user_data", "desc": "Optional: Extra data provided by the user. Pass an empty slice if not used." }
],
"returns": { "type": "(uint64,byte[])", "desc": "The closest round and the oracle's response for that round." }
}
]
}
```
### Method boundaries
[Section titled “Method boundaries”](#method-boundaries)
* The `get`, `must_get`, `get_closest` and `must_get_closest` functions **MUST NOT** use local state.
* Optional arguments of type `byte[]` that are not used are expected to be passed as an empty byte slice.
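As a non-normative illustration, a caller can derive the 4-byte ARC-4 selector for each of these methods from its signature string (first 4 bytes of the SHA-512/256 hash of the signature, per ARC-4):

```python
import hashlib

def method_selector(signature: str) -> bytes:
    # ARC-4 selector: first 4 bytes of the SHA-512/256 hash of the
    # method signature string (requires OpenSSL's sha512_256).
    return hashlib.new("sha512_256", signature.encode()).digest()[:4]

# Signatures assembled from the ABI interface above
for sig in ("get(uint64,byte[])byte[]",
            "must_get(uint64,byte[])byte[]",
            "get_closest(uint64,uint64,byte[])(uint64,byte[])",
            "must_get_closest(uint64,uint64,byte[])(uint64,byte[])"):
    print(f"{sig} -> 0x{method_selector(sig).hex()}")
```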
## Rationale
[Section titled “Rationale”](#rationale)
The goal of these conventions is to make it easier for smart-contracts to interact with off-chain data sources.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Add `read-only` annotation to ABI methods
> Convention for creating methods which don't mutate state
The following document introduces a convention for creating methods (as described in [ARC-4](/arc-standards/arc-0004)) which don’t mutate state.
## Abstract
[Section titled “Abstract”](#abstract)
The goal of this convention is to allow smart contract developers to distinguish between methods which mutate state and methods which don’t by introducing a new property to the `Method` descriptor.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
### Read-only functions
[Section titled “Read-only functions”](#read-only-functions)
A `read-only` function is a function with no side-effects. In particular, a `read-only` function **SHOULD NOT** include:
* local/global state modifications
* calls to non `read-only` functions
* inner-transactions
It is **RECOMMENDED** for a `read-only` function to not access transactions in a group or metadata of the group.
> The goal is to allow algod to easily execute `read-only` functions without broadcasting a transaction
In order to support this annotation, the following `Method` descriptor is suggested:
```typescript
interface Method {
/** The name of the method */
name: string;
/** Optional, user-friendly description for the method */
desc?: string;
/** Optional, is it a read-only method (according to ARC-22) */
readonly?: boolean;
/** The arguments of the method, in order */
args: Array<{
/** The type of the argument */
type: string;
/** Optional, user-friendly name for the argument */
name?: string;
/** Optional, user-friendly description for the argument */
desc?: string;
}>;
/** Information about the method's return value */
returns: {
/** The type of the return value, or "void" to indicate no return value. */
type: string;
/** Optional, user-friendly description for the return value */
desc?: string;
};
}
```
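A tool consuming a contract description could use the new property like this (illustrative sketch; the `contract` object is made up):

```python
# Hypothetical contract description using the proposed readonly property
contract = {
    "name": "Example",
    "methods": [
        {"name": "get_balance", "args": [],
         "returns": {"type": "uint64"}, "readonly": True},
        {"name": "transfer", "args": [{"type": "uint64"}],
         "returns": {"type": "void"}},
    ],
}

def readonly_methods(contract: dict) -> list:
    # A missing "readonly" key means the method may mutate state.
    return [m["name"] for m in contract["methods"] if m.get("readonly")]

print(readonly_methods(contract))  # ['get_balance']
```

A client could then evaluate such methods via simulation rather than by broadcasting a transaction.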
## Rationale
[Section titled “Rationale”](#rationale)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Sharing Application Information
> Append application information to compiled TEAL applications
## Abstract
[Section titled “Abstract”](#abstract)
The following document introduces a convention for appending information (stored in various files) to the compiled application’s bytes. The goal of this convention is to standardize the process of verifying and adding this information. The encoded information byte string is `arc23` followed by the IPFS CID v1 of a folder containing the files with the information.
The minimum required file is `contract.json`, representing the contract metadata (as described in [ARC-4](/arc-standards/arc-0004) and as extended by future potential ARCs).
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
### Files containing Application Information
[Section titled “Files containing Application Information”](#files-containing-application-information)
Application information is represented by various files in a folder that:
* **MUST** contain a file `contract.json` representing the contract metadata (as described in [ARC-4](/arc-standards/arc-0004) and as extended by future potential ARCs).
* **MAY** contain a file with the basename `application` followed by the extension of the high-level language the application is written in (e.g., `application.py` for PyTeal).
> To allow the verification of your contract, be sure to write the version used to compile the file after the import eg: `from pyteal import * #pyteal==0.20.1`
* **MAY** contain the files `approval.teal` and `clear.teal`, that are the compiled versions of approval and clear program in TEAL.
* Note that `approval.teal` will not be able to contain the application information as this would create circularity. If `approval.teal` is provided, it is assumed that the *actual* `approval.teal` that is deployed corresponds to `approval.teal` with the proper `bytecblock` (defined below) appended at the end.
* **MAY** contain other files as defined by other ARCs.
### CID, Pinning, and CAR of the Application Information
[Section titled “CID, Pinning, and CAR of the Application Information”](#cid-pinning-and-car-of-the-application-information)
The [CID](https://github.com/multiformats/cid) allows access to the corresponding application information files using [IPFS](https://docs.ipfs.tech/).
The CID **MUST**:
* Represent a folder of files, even if only `contract.json` is present.
> You may need to use the option `--wrap-with-directory` of `ipfs add`
* Be a version V1 CID
> E.g., use the option `--cid-version=1` of `ipfs add`
* Use SHA-256 hash algorithm
> E.g., use the option `--hash=sha2-256` of `ipfs add`
Since the exact CID depends on the options provided when creating it and on the IPFS software version (if default options are used), for any production application the folder of files **SHOULD** be published and pinned on IPFS.
> All examples in this ARC assume the use of Kubo IPFS version 0.17.0 with default options apart from those explicitly stated.
If the folder is not pinned on IPFS, any production application **SHOULD** provide a [Content Addressable aRchive (CAR)](https://ipld.io/specs/transport/car/carv1) file of the folder, obtained using `ipfs dag export`.
For public networks (e.g., MainNet, TestNet, BetaNet), block explorers and wallets (that support this ARC) **SHOULD** try to recover application information files from IPFS, and if not possible, **SHOULD** allow developers to upload a CAR file. If a CAR file is used, these tools **MUST** validate the CAR file matches the CID.
For development purposes, on private networks, the application information files **MAY** be instead provided as a .zip or .tar.gz containing at the root all the required files. Block explorers and wallets for *private* networks **MAY** allow uploading the application information as a .zip or .tar.gz. They still **SHOULD** validate the files.
> The validation of .zip or .tar.gz files will work if the same version of the IPFS software is used with the same option. Since for development purposes, the same machine is normally used to code the dApp and run the block explorer/wallet, this is most likely not an issue. However, for production purposes, we cannot assume the same IPFS software is used and a CAR file is the best solution to ensure that the application information files will always be available and possible to validate.
> Example: For the example stored in `/asset/arc-0023/application_information`, the CID is `bafybeiavazvdva6uyxqudfsh57jbithx7r7juzvxhrylnhg22aeqau6wte`, which can be obtained with the command:
>
> ```plaintext
> ipfs add --cid-version=1 --hash=sha2-256 --recursive --quiet --wrap-with-directory --only-hash application_information
> ```
### Associated Encoded Information Byte String
[Section titled “Associated Encoded Information Byte String”](#associated-encoded-information-byte-string)
The (encoded) information byte string is `arc23` concatenated to the 36 bytes of the binary CID.
The information byte string is always 41 bytes long and always starts, in hexadecimal, with `0x6172633233` (corresponding to `arc23`).
> Example: for the above CID `bafybeiavazvdva6uyxqudfsh57jbithx7r7juzvxhrylnhg22aeqau6wte`, the binary CID corresponds to the following hexadecimal value:
>
> ```plaintext
> 0x0170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699
> ```
>
> and hence the encoded information byte string has the following hexadecimal value:
>
> ```plaintext
> 0x61726332330170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699
> ```
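This construction can be checked in a few lines of Python, using the hex values from the example above:

```python
# Binary CID from the example above (hex)
cid_bytes = bytes.fromhex(
    "0170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699"
)
# 0x01 (CIDv1) + 0x70 (dag-pb) + 0x12 0x20 (sha2-256, 32 bytes) + digest
assert len(cid_bytes) == 36

# Encoded information byte string: "arc23" followed by the binary CID
info = b"arc23" + cid_bytes
assert len(info) == 41
print("0x" + info.hex())
```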
### Inclusion of the Encoded Information Byte String in Programs
[Section titled “Inclusion of the Encoded Information Byte String in Programs”](#inclusion-of-the-encoded-information-byte-string-in-programs)
The encoded information byte string is included in the *approval program* of the application via a [`bytecblock`](https://developer.algorand.org/docs/get-details/dapps/avm/teal/opcodes/#bytecblock-bytes) with a unique byte string equal to the encoded information byte string.
> For the example above, the `bytecblock` is:
>
> ```plaintext
> bytecblock 0x61726332330170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699
> ```
>
> and when compiled this gives the following byte string (at least with TEAL v8 and before):
>
> ```plaintext
> 0x26012961726332330170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699
> ```
The size of the compiled application plus the bytecblock **MUST** be, at most, the maximum size of a compiled application according to the latest consensus parameters supported by the compiler.
> At least with TEAL v8 and before, appending the `bytecblock` to the end of the program should add exactly 44 bytes (1 byte for the opcode `bytecblock`, 1 byte for 0x01 — the number of byte strings, 1 byte for 0x29 — the length of the encoded information byte string, and 41 bytes for the encoded information byte string).
The `bytecblock` **MAY** be placed anywhere in the TEAL source code as long as it does not modify the semantic of the TEAL source code. However, if `approval.teal` is provided as an application information file, the `bytecblock` **SHOULD** be the last opcode of the deployed TEAL program.
Developers **MUST** check that adding the `bytecblock` to their program does not change its semantics.
> At least with TEAL v8 and before, adding a `bytecblock` opcode at the end of the approval program does not change the semantics of the program, as long as opcodes are correctly aligned, there is no jump past the last position (which would make the program fail without the `bytecblock`), and there is enough space left to add the opcode. However, though very unlikely, future versions of TEAL may not satisfy this property.
The `bytecblock` **MUST NOT** contain any additional byte string beyond the encoded information byte string.
> For example, the following `bytecblock` is **INVALID**:
>
> ```plaintext
> bytecblock 0x61726332330170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699 0x42
> ```
### Retrieval of the Encoded Information Byte String and CID from Compiled TEAL Programs
[Section titled “Retrieval of the Encoded Information Byte String and CID from Compiled TEAL Programs”](#retrieval-the-encoded-information-byte-string-and-cid-from-compiled-teal-programs)
For programs until TEAL v8, a way to find the encoded information byte string is to search for the prefix:
```plaintext
0x2601296172633233
```
which is then followed by the 36 bytes of the binary CID.
Indeed, this prefix is composed of:
* 0x26, the `bytecblock` opcode
* 0x01, the number of byte strings provided in the `bytecblock`
* 0x29, the length of the encoded information byte string
* 0x6172633233, the hexadecimal of `arc23`
Software retrieving the encoded information byte string **SHOULD** check the TEAL version and only perform retrieval for supported TEAL versions. It also **SHOULD** gracefully handle false positives, that is, when the above prefix is found multiple times. One solution is to allow multiple possible CIDs for a given compiled program.
Note that opcode encoding may change with the TEAL version (though this did not happen up to TEAL v8 at least). If the `bytecblock` opcode encoding changes, software that extracts the encoded information byte string from compiled TEAL programs **MUST** be updated.
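A minimal, non-normative sketch of this retrieval (naive substring search that returns every candidate CID, so callers can handle false positives):

```python
PREFIX = bytes.fromhex("2601296172633233")  # bytecblock, 0x01, 0x29, "arc23"

def extract_cids(program: bytes) -> list:
    # Return every candidate 36-byte binary CID following the prefix.
    # Multiple matches are possible (false positives); callers decide.
    cids, start = [], 0
    while (i := program.find(PREFIX, start)) != -1:
        cid = program[i + len(PREFIX):i + len(PREFIX) + 36]
        if len(cid) == 36:
            cids.append(cid)
        start = i + 1
    return cids

# Demo: a made-up program body followed by the bytecblock from the example
cid = bytes.fromhex(
    "0170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699"
)
program = b"\x08\x81\x01" + PREFIX + cid
print([c.hex() for c in extract_cids(program)])
```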
## Rationale
[Section titled “Rationale”](#rationale)
By appending the IPFS CID of the folder containing information about the Application, any user with access to the blockchain could easily verify the Application and the ABI of the Application and interact with it.
Using IPFS has several advantages:
* Allows automatic retrieval of the application information when pinned.
* Allows easy archival using CAR.
* Allows support of multiple files.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
The following code is not audited and is provided for informational purposes only.
Here is an example of a Python script that generates the hash and appends it to the compiled application, according to this ARC: [main.py](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0023/main.py).
A folder containing:
* an example application: [application.py](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0023/application_information/application.py).
* an example of contract metadata that follows [ARC-4](/arc-standards/arc-0004): [contract.json](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0023/application_information/contract.json).
The files are accessible through the following IPFS commands:
```console
$ ipfs cat bafybeiavazvdva6uyxqudfsh57jbithx7r7juzvxhrylnhg22aeqau6wte/contract.json
$ ipfs cat bafybeiavazvdva6uyxqudfsh57jbithx7r7juzvxhrylnhg22aeqau6wte/application.py
```
> If they are not accessible, be sure to remove `--only-hash` (`-n`) from your `ipfs add` command, or check your IPFS node.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
CIDs are unique; however, the related files **MUST** be checked to ensure that the application conforms. An `arc-23` CID added at the end of an application shares information; it is not proof of anything. In particular, nothing ensures that a provided `approval.teal` matches the actual program on chain.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand WalletConnect v1 API
> API for communication between Dapps and wallets using WalletConnect
This document specifies a standard API for communication between Algorand decentralized applications and wallets using the WalletConnect v1 protocol.
## Abstract
[Section titled “Abstract”](#abstract)
WalletConnect is an open protocol to communicate securely between mobile wallets and decentralized applications (dApps) using QR code scanning (desktop) or deep linking (mobile). Its main use case is allowing users to sign transactions on web apps using a mobile wallet.
This document aims to establish a standard API for using the WalletConnect v1 protocol on Algorand, leveraging the existing transaction signing APIs defined in [ARC-1](/arc-standards/arc-0001).
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
It is strongly recommended to read and understand the entirety of [ARC-1](/arc-standards/arc-0001) before reading this ARC.
### Overview
[Section titled “Overview”](#overview)
This overview section is non-normative. It offers a brief overview of the WalletConnect v1 lifecycle. A more in-depth description can be found in the WalletConnect v1 documentation.
In order for a dApp and wallet to communicate using WalletConnect, a WalletConnect session must be established between them. The dApp is responsible for initiating this session and producing a session URI, which it will communicate to the wallet, typically in the form of a QR code or a deep link. This process is described in the [Session Creation](#session-creation) section.
Once a session is established between a dApp and a wallet, the dApp is able to send requests to the wallet. The wallet is responsible for listening for requests, performing the appropriate actions to fulfill requests, and sending responses back to the dApp with the results of requests. This process is described in the [Message Schema](#message-schema) section.
### Session Creation
[Section titled “Session Creation”](#session-creation)
The dApp is responsible for initializing a WalletConnect session and producing a WalletConnect URI that communicates the necessary session information to the wallet. This process is as described in the WalletConnect documentation, with one addition. In order for wallets to be able to easily and immediately recognize an Algorand WalletConnect session, dApps **SHOULD** add an additional URI query parameter to the WalletConnect URI. If present, the name of this parameter **MUST** be `algorand` and its value **MUST** be `true`. This query parameter can appear in any order relative to the other query parameters in the URI.
> For example, here is a standard WalletConnect URI:
>
> ```plaintext
> wc:4015f93f-b88d-48fc-8bfe-8b063cc325b6@1?bridge=https%3A%2F%2F9.bridge.walletconnect.org&key=b0576e0880e17f8400bfff92d4caaf2158cccc0f493dcf455ba76d448c9b5655
> ```
>
> And here is that same URI with the Algorand-specific query parameter:
>
> ```plaintext
> wc:4015f93f-b88d-48fc-8bfe-8b063cc325b6@1?bridge=https%3A%2F%2F9.bridge.walletconnect.org&key=b0576e0880e17f8400bfff92d4caaf2158cccc0f493dcf455ba76d448c9b5655&algorand=true
> ```
It is **RECOMMENDED** that dApps include this query parameter, but it is not **REQUIRED**. Wallets **MAY** reject sessions if the session URI does not contain this query parameter.
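A wallet could detect the hint as follows (illustrative sketch; `is_algorand_session_uri` is not part of the standard):

```python
from urllib.parse import parse_qs, urlparse

def is_algorand_session_uri(uri: str) -> bool:
    # A WalletConnect v1 URI looks like wc:<topic>@<version>?<query>;
    # the Algorand hint is the query parameter algorand=true.
    parsed = urlparse(uri)
    if parsed.scheme != "wc":
        return False
    query = parse_qs(parsed.query)
    return query.get("algorand") == ["true"]

# The example URI from this section, with the Algorand hint appended
uri = ("wc:4015f93f-b88d-48fc-8bfe-8b063cc325b6@1"
       "?bridge=https%3A%2F%2F9.bridge.walletconnect.org"
       "&key=b0576e0880e17f8400bfff92d4caaf2158cccc0f493dcf455ba76d448c9b5655"
       "&algorand=true")
print(is_algorand_session_uri(uri))  # True
```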
#### Chain IDs
[Section titled “Chain IDs”](#chain-ids)
WalletConnect v1 sessions are associated with a numeric chain ID. Since Algorand chains do not have numeric identifiers (instead, the genesis hash or ID is used for this purpose), this document defines the following chain IDs for the Algorand ecosystem:
* MainNet (genesis hash `wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=`): 416001
* TestNet (genesis hash `SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=`): 416002
* BetaNet (genesis hash `mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=`): 416003
At the time of writing, these chain IDs do not conflict with any known chain that also uses WalletConnect. In the unfortunate event that this were to happen, the `algorand` query parameter discussed above would be used to differentiate Algorand chains from others.
Future Algorand chains, if introduced, **MUST** be assigned new chain IDs.
Wallets and dApps **MAY** support all of the above chain IDs or only a subset of them. If a chain ID is presented to a wallet or dApp that does not support that chain ID, they **MUST** terminate the session.
For compatibility with WalletConnect usage prior to this ARC, the following catch-all chain ID is also defined:
* All Algorand Chains (legacy value): 4160
Wallets and dApps **SHOULD** support this chain ID as well for backwards compatibility. Unfortunately this ID alone is not enough to identify which Algorand chain is being used, so extra fields in message requests (i.e. the genesis hash field in a transaction to sign) **SHOULD** be consulted as well to determine this.
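The chain IDs above can be captured in a simple lookup table; a wallet might use something like this (illustrative only):

```python
# Chain IDs defined above, keyed to their genesis hashes
ALGORAND_CHAINS = {
    416001: "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",  # MainNet
    416002: "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",  # TestNet
    416003: "mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=",  # BetaNet
}
LEGACY_CHAIN_ID = 4160  # catch-all: inspect the txn genesis hash instead

def genesis_hash_for(chain_id: int):
    # Returns None for the legacy catch-all and for unknown IDs; a wallet
    # MUST terminate the session for chain IDs it does not support.
    return ALGORAND_CHAINS.get(chain_id)

print(genesis_hash_for(416002))
```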
### Message Schema
[Section titled “Message Schema”](#message-schema)
Note: interfaces are defined in TypeScript. These interfaces are designed to be serializable to and from valid JSON objects.
The WalletConnect message schema is a set of JSON-RPC 2.0 requests and responses. Decentralized applications will send requests to the wallets and will receive responses as JSON-RPC messages. All requests **MUST** adhere to the following structure:
```typescript
interface JsonRpcRequest {
/**
* An identifier established by the Client. Numbers SHOULD NOT contain fractional parts.
*/
id: number;
/**
* A String specifying the version of the JSON-RPC protocol. MUST be exactly "2.0".
*/
jsonrpc: "2.0";
/**
* A String containing the name of the RPC method to be invoked.
*/
method: string;
/**
* A Structured value that holds the parameter values to be used during the invocation of the method.
*/
params: any[];
}
```
The Algorand WalletConnect schema consists of a single RPC method, `algo_signTxn`, as described in the following section.
All responses, whether successful or unsuccessful, **MUST** adhere to the following structure:
```typescript
interface JsonRpcResponse {
/**
* This member is REQUIRED.
* It MUST be the same as the value of the id member in the Request Object.
* If there was an error in detecting the id in the Request object (e.g. Parse error/Invalid Request), it MUST be Null.
*/
id: number;
/**
* A String specifying the version of the JSON-RPC protocol. MUST be exactly "2.0".
*/
jsonrpc: "2.0";
/**
* This member is REQUIRED on success.
* This member MUST NOT exist if there was an error invoking the method.
* The value of this member is determined by the method invoked on the Server.
*/
result?: any;
/**
* This member is REQUIRED on error.
* This member MUST NOT exist if the requested method was invoked successfully.
*/
error?: JsonRpcError;
}
interface JsonRpcError {
/**
* A Number that indicates the error type that occurred.
* This MUST be an integer.
*/
code: number;
/**
* A String providing a short description of the error.
* The message SHOULD be limited to a concise single sentence.
*/
message: string;
/**
* A Primitive or Structured value that contains additional information about the error.
* This may be omitted.
* The value of this member is defined by the Server (e.g. detailed error information, nested errors etc.).
*/
data?: any;
}
```
#### `algo_signTxn`
[Section titled “algo\_signTxn”](#algo_signtxn)
This request is used to ask a wallet to sign one or more transactions in one or more atomic groups.
##### Request
[Section titled “Request”](#request)
This request **MUST** adhere to the following structure:
```typescript
interface AlgoSignTxnRequest {
/**
* As described in JsonRpcRequest.
*/
id: number;
/**
* As described in JsonRpcRequest.
*/
jsonrpc: "2.0";
/**
* The method to invoke, MUST be "algo_signTxn".
*/
method: "algo_signTxn";
/**
* Parameters for the transaction signing request.
*/
params: SignTxnParams;
}
/**
* The first element is an array of `WalletTransaction` objects which contain the transaction(s) to be signed.
* If transactions from an atomic transaction group are being signed, then all transactions in the group (even the ones not being signed by the wallet) MUST appear in this array.
*
* The second element, if present, contains additional options specified with the `SignTxnOpts` structure.
*/
type SignTxnParams = [WalletTransaction[], SignTxnOpts?];
```
> `SignTxnParams` is a tuple with an optional element, meaning its length can be 1 or 2.
The [`WalletTransaction`](/arc-standards/arc-0001#interface-wallettransaction) and [`SignTxnOpts`](/arc-standards/arc-0001#interface-signtxnsopts) types are defined in [ARC-1](/arc-standards/arc-0001).
All specifications, restrictions, and guidelines declared in ARC-1 for these types apply to their usage here as well. Additionally, all security requirements and restrictions for processing transaction signing requests from ARC-1 apply to this request as well.
> For more information, see [ARC-1 - Syntax and Interfaces](/arc-standards/arc-0001#syntax-and-interfaces) and [ARC-1 - Semantic and Security Requirements](/arc-standards/arc-0001#semantic-and-security-requirements).
##### Response
[Section titled “Response”](#response)
To respond to a request, the wallet **MUST** send back the following response object:
```typescript
interface AlgoSignTxnResponse {
/**
* As described in JsonRpcResponse.
*/
id: number;
/**
* As described in JsonRpcResponse.
*/
jsonrpc: "2.0";
/**
* An array containing signed transactions at specific indexes.
*/
result?: Array<SignedTxnStr | null>;
/**
* As described in JsonRpcResponse.
*/
error?: JsonRpcError;
}
```
The [`SignedTxnStr`](/arc-standards/arc-0001#interface-signedtxnstr) type is defined in [ARC-1](/arc-standards/arc-0001).
In this response, `result` **MUST** be an array with the same length as the number of `WalletTransaction`s in the request (i.e. `.params[0].length`). For every integer `i` such that `0 <= i < result.length`:
* If the transaction at index `i` in the group should be signed by the wallet (i.e. `.params[0][i].signers` is not an empty array): `result[i]` **MUST** be a base64-encoded string containing the msgpack-encoded signed transaction `params[0][i].txn`.
* Otherwise: `result[i]` **MUST** be `null`, since the wallet was not requested to sign this transaction.
If the wallet does not approve signing every transaction whose signature is being requested, the request **MUST** fail.
All request failures **MUST** use the error codes defined in [ARC-1 - Error Standards](/arc-standards/arc-0001#error-standards).
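A dApp could sanity-check a wallet's response against these rules with a sketch like the following (`validate_sign_txn_result` is a hypothetical helper and the base64 payload is made up):

```python
import base64

def validate_sign_txn_result(wallet_txns: list, result: list) -> None:
    # Hypothetical dApp-side check: wallet_txns mirrors params[0] of the
    # request, result mirrors the response's result array.
    if len(result) != len(wallet_txns):
        raise ValueError("result length must match params[0].length")
    for i, (wtxn, signed) in enumerate(zip(wallet_txns, result)):
        requested = wtxn.get("signers") != []  # empty array: do not sign
        if requested and not isinstance(signed, str):
            raise ValueError(f"missing signed transaction at index {i}")
        if not requested and signed is not None:
            raise ValueError(f"unexpected signed transaction at index {i}")
        if isinstance(signed, str):
            base64.b64decode(signed, validate=True)  # must be valid base64

# First txn is to be signed, second provides group context only
validate_sign_txn_result(
    [{"txn": "..."}, {"txn": "...", "signers": []}],
    ["Z2FkAA==", None],  # made-up base64 payload, then null
)
```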
## Rationale
[Section titled “Rationale”](#rationale)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# URI scheme
> A specification for encoding Transactions in a URI format.
## Abstract
[Section titled “Abstract”](#abstract)
This URI specification represents a standardized way for applications and websites to send requests and information through deeplinks, QR codes, etc. It is heavily based on Bitcoin’s [BIP-0021](https://github.com/bitcoin/bips/blob/master/bip-0021.mediawiki) and should be seen as a derivative of it. Basing it on BIP-0021 keeps it as easy and as compatible as possible for other applications to adopt.
## Specification
[Section titled “Specification”](#specification)
### General format
[Section titled “General format”](#general-format)
Algorand URIs follow the general format for URIs as set forth in [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986). The path component consists of an Algorand address, and the query component provides additional payment options.
Elements of the query component may contain characters outside the valid range. These must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence must be percent-encoded as described in RFC 3986.
### ABNF Grammar
[Section titled “ABNF Grammar”](#abnf-grammar)
```plaintext
algoranduri = "algorand://" algorandaddress [ "?" algorandparams ]
algorandaddress = *base32
algorandparams = algorandparam [ "&" algorandparams ]
algorandparam = [ amountparam / labelparam / noteparam / assetparam / otherparam ]
amountparam = "amount=" *digit
labelparam = "label=" *qchar
assetparam = "asset=" *digit
noteparam = ( xnote / note )
xnote = "xnote=" *qchar
note = "note=" *qchar
otherparam = qchar *qchar [ "=" *qchar ]
```
Here, “qchar” corresponds to valid characters of an RFC 3986 URI query component, excluding the “=” and “&” characters, which this specification takes as separators.
The scheme component (“algorand:”) is case-insensitive, and implementations must accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys.
!!! Caveat When generating a QR code for an address, many exchanges and wallets encode the bare address without the scheme component (“algorand:”). Such a string is not a URI, so this specification does not apply to it.
### Query Keys
[Section titled “Query Keys”](#query-keys)
* label: Label for that address (e.g. name of receiver)
* address: Algorand address
* xnote: A URL-encoded notes field value that must not be modifiable by the user when displayed to users.
* note: A URL-encoded default notes field value that the user interface may optionally make editable by the user.
* amount: microAlgos or smallest unit of asset
* asset: The asset id this request refers to (if Algos, simply omit this parameter)
* (others): optional, for future extensions
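Putting these query keys together, a payment URI can be assembled with nothing but the standard library (the address below is the one used in the appendix examples; the parameter values are illustrative):

```python
from urllib.parse import urlencode

address = "TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4"
# Only include the keys you need; amount is an integer in the asset's base unit.
params = {"label": "Silvio", "amount": 150500000}
uri = f"algorand://{address}?{urlencode(params)}"
print(uri)
```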
### Transfer amount/size
[Section titled “Transfer amount/size”](#transfer-amountsize)
!!! Note This is DIFFERENT from Bitcoin’s BIP-0021
If an amount is provided, it MUST be specified in the basic unit of the asset. For example, if the asset is Algorand’s native unit, the amount MUST be specified in microAlgos. Amounts MUST NOT contain commas or a period (.); they are strictly non-negative integers.
e.g. for 100 Algos, the amount needs to be 100000000; for 54.1354 Algos, the amount needs to be 54135400.
Algorand Clients should display the amount in whole Algos. Where needed, microAlgos can be used as well. In any case, the units shall be clear for the user.
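Because amounts are strictly integers in the asset’s base unit, a client converting from a human-entered Algo value should avoid binary floating point. A minimal sketch using `Decimal` (the helper name is ours, not part of the specification):

```python
from decimal import Decimal

def algos_to_microalgos(algos: str) -> int:
    """Convert a whole-Algo amount (given as a string) to integer microAlgos."""
    micro = Decimal(algos) * 1_000_000
    if micro != micro.to_integral_value():
        # More than 6 decimal places cannot be represented in microAlgos.
        raise ValueError("sub-microAlgo precision is not representable")
    return int(micro)

print(algos_to_microalgos("100"))      # 100000000
print(algos_to_microalgos("54.1354"))  # 54135400
```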
### Appendix
[Section titled “Appendix”](#appendix)
This section contains several examples:
address -
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4
```
address with label -
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?label=Silvio
```
Request 150.5 Algos from an address -
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150500000
```
Request 150 units of Asset ID 45 from an address -
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150&asset=45
```
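Reading such a URI back is equally mechanical. A sketch using `urllib.parse`; note that with the `algorand://` form (two slashes) the address lands in the authority component, whereas the `algorand:` form some wallets emit would place it in the path:

```python
from urllib.parse import urlsplit, parse_qs

uri = ("algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4"
       "?amount=150&asset=45")
parts = urlsplit(uri)
assert parts.scheme == "algorand"
address = parts.netloc  # with "algorand://", the address is the authority
# parse_qs returns lists of values; take the first occurrence of each key.
params = {key: values[0] for key, values in parse_qs(parts.query).items()}
print(address, params)
```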
## Rationale
[Section titled “Rationale”](#rationale)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Provider Message Schema
> A comprehensive message schema for communication between clients and providers.
## Abstract
[Section titled “Abstract”](#abstract)
Building off of the work of the previous ARCs relating to: provider transaction signing ([ARC-0005](/arc-standards/arc-0005#specification)), provider address discovery ([ARC-0006](/arc-standards/arc-0006#specification)), provider transaction network posting ([ARC-0007](/arc-standards/arc-0007#specification)) and provider transaction signing & posting ([ARC-0008](/arc-standards/arc-0008#specification)), this proposal aims to comprehensively outline a common message schema between clients and providers.
Furthermore, this proposal extends the aforementioned methods to encompass new functionality such as:
* Extending the message structure to target specific networks, thereby supporting multiple AVM (Algorand Virtual Machine) chains.
* Adding a new method that disables clients on providers.
* Adding a new method to discover provider capabilities, such as what networks and methods are supported.
This proposal serves as a formalization of the message schema and leaves the implementation details to the prerogative of the clients and providers.
[Back to top ^](/arc-standards/arc-0027#abstract)
## Motivation
[Section titled “Motivation”](#motivation)
The previous ARCs relating to client/provider communication ([ARC-0005](/arc-standards/arc-0005), [ARC-0006](/arc-standards/arc-0006), [ARC-0007](/arc-standards/arc-0007) and [ARC-0008](/arc-standards/arc-0008)) serve as the foundation of this proposal. However, this proposal brings these previous ARCs together and extends their functionality, as some of the previous formats were not very robust when it came to targeting a specific AVM chain.
More methods have been added in an attempt to “fill in the gaps” of the previous client/provider communication ARCs.
[Back to top ^](/arc-standards/arc-0027#abstract)
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
### Definitions
[Section titled “Definitions”](#definitions)
This section is non-normative.
* Client
* An end-user application that interacts with a provider; e.g. a dApp.
* Provider
* An application that manages private keys and performs signing operations; e.g. a wallet.
[Back to top ^](/arc-standards/arc-0027#abstract)
### Message Reference Naming
[Section titled “Message Reference Naming”](#message-reference-naming)
In order for each message to be identifiable, each message **MUST** contain a `reference` property. Furthermore, this `reference` property **MUST** conform to the following naming convention:
```plaintext
[namespace]:[method]:[type]
```
where:
* `namespace`:
* **MUST** be `arc0027`
* `method`:
* **MUST** be in snake case
* **MUST** be one of `disable`, `discover`, `enable`, `post_transactions`, `sign_and_post_transactions`, `sign_message` or `sign_transactions`
* `type`:
* **MUST** be one of `request` or `response`
This convention ensures that each message can be identified and handled.
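As a sketch, the convention can be enforced with a single regular expression (the helper name is illustrative, not part of the standard):

```python
import re

METHODS = ("disable", "discover", "enable", "post_transactions",
           "sign_and_post_transactions", "sign_message", "sign_transactions")
# [namespace]:[method]:[type] with namespace fixed to "arc0027"
REFERENCE_RE = re.compile(rf"arc0027:(?:{'|'.join(METHODS)}):(?:request|response)")

def is_valid_reference(reference: str) -> bool:
    return REFERENCE_RE.fullmatch(reference) is not None

print(is_valid_reference("arc0027:sign_transactions:request"))  # True
print(is_valid_reference("arc0027:discover:reply"))             # False
```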
[Back to top ^](/arc-standards/arc-0027#abstract)
### Supported Methods
[Section titled “Supported Methods”](#supported-methods)
| Name | Summary | Example |
| ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- |
| `disable` | Removes a client’s access to the provider. What this looks like is the prerogative of the provider. | [here](#disable-example) |
| `discover` | Sent by a client to discover the available provider(s). If the `params.providerId` property is supplied, only the provider with the matching ID **SHOULD** respond. This method is usually called before other methods as it allows the client to identify provider(s), the networks the provider(s) supports and the methods the provider(s) supports on each network. | [here](#discover-example) |
| `enable` | Requests that a provider allow a client access to the provider’s accounts. The response **MUST** return a user-curated list of available addresses. Providers **SHOULD** create a “session” for the requesting client; what this looks like is the prerogative of the provider(s) and is beyond the scope of this proposal. | [here](#enable-example) |
| `post_transactions` | Sends a list of signed transactions to be posted to the network by the provider. | [here](#post-transactions-example) |
| `sign_and_post_transactions` | Sends a list of transactions to be signed and then posted to the network by the provider. | [here](#sign-and-post-transactions-example) |
| `sign_message` | Sends a UTF-8 encoded message to be signed by the provider. | [here](#sign-message-example) |
| `sign_transactions` | Sends a list of transactions to be signed by the provider. | [here](#sign-transactions-example) |
[Back to top ^](/arc-standards/arc-0027#abstract)
### Request Message Schema
[Section titled “Request Message Schema”](#request-message-schema)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/request-message",
"title": "Request Message",
"description": "Outlines the structure of a request message",
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "A globally unique identifier for the message",
"format": "uuid"
},
"reference": {
"description": "Identifies the purpose of the message",
"enum": [
"arc0027:disable:request",
"arc0027:discover:request",
"arc0027:enable:request",
"arc0027:post_transactions:request",
"arc0027:sign_and_post_transactions:request",
"arc0027:sign_message:request",
"arc0027:sign_transactions:request"
]
}
},
"allOf": [
{
"if": {
"properties": {
"reference": {
"const": "arc0027:disable:request"
}
},
"required": ["id", "reference"]
},
"then": {
"properties": {
"params": {
"$ref": "/schemas/disable-params"
}
}
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:discover:request"
}
},
"required": ["id", "reference"]
},
"then": {
"properties": {
"params": {
"$ref": "/schemas/discover-params"
}
}
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:enable:request"
}
},
"required": ["id", "reference"]
},
"then": {
"properties": {
"params": {
"$ref": "/schemas/enable-params"
}
}
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:post_transactions:request"
}
},
"required": ["id", "params", "reference"]
},
"then": {
"properties": {
"params": {
"$ref": "/schemas/post-transactions-params"
}
}
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:sign_and_post_transactions:request"
}
},
"required": ["id", "params", "reference"]
},
"then": {
"properties": {
"params": {
"$ref": "/schemas/sign-and-post-transactions-params"
}
}
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:sign_message:request"
}
},
"required": ["id", "params", "reference"]
},
"then": {
"properties": {
"params": {
"$ref": "/schemas/sign-message-params"
}
}
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:sign_transactions:request"
}
},
"required": ["id", "params", "reference"]
},
"then": {
"properties": {
"params": {
"$ref": "/schemas/sign-transactions-params"
}
}
}
}
]
}
```
where:
* `id`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* `reference`:
* **MUST** be a string that conforms to the [message reference naming](#message-reference-naming) convention
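A minimal request envelope satisfying this schema might be assembled as follows (the `make_request` helper is illustrative, not part of the standard):

```python
import uuid
from typing import Optional

def make_request(method: str, params: Optional[dict] = None) -> dict:
    """Build an ARC-27 request message envelope."""
    message = {
        "id": str(uuid.uuid4()),  # globally unique, UUIDv4 compliant
        "reference": f"arc0027:{method}:request",
    }
    if params is not None:  # params are required for some methods only
        message["params"] = params
    return message

request = make_request("enable", {"providerId": str(uuid.uuid4())})
print(request["reference"])  # arc0027:enable:request
```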
[Back to top ^](/arc-standards/arc-0027#abstract)
#### Param Definitions
[Section titled “Param Definitions”](#param-definitions)
##### Disable Params
[Section titled “Disable Params”](#disable-params)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/disable-params",
"title": "Disable Params",
"description": "Disables a previously enabled client with any provider(s)",
"type": "object",
"properties": {
"genesisHash": {
"type": "string",
"description": "The unique identifier for the network that is the hash of the genesis block"
},
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"sessionIds": {
"type": "array",
"description": "A list of specific session IDs to remove",
"items": {
"type": "string"
}
}
},
"required": ["providerId"]
}
```
where:
* `genesisHash`:
* **OPTIONAL**; if omitted, the provider **SHOULD** assume the “default” network
* **MUST** be a base64 encoded hash of the genesis block of the network
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* `sessionIds`:
* **OPTIONAL**; if omitted, all sessions **MUST** be removed
* **MUST** remove all sessions if the list is empty
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Discover Params
[Section titled “Discover Params”](#discover-params)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/discover-params",
"title": "Discover Params",
"description": "Gets a list of available providers",
"type": "object",
"properties": {
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
}
}
}
```
where:
* `providerId`:
* **OPTIONAL**; if omitted, all providers **MAY** respond
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Enable Params
[Section titled “Enable Params”](#enable-params)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/enable-params",
"title": "Enable Params",
"description": "Asks provider(s) to enable the requesting client",
"type": "object",
"properties": {
"genesisHash": {
"type": "string",
"description": "The unique identifier for the network that is the hash of the genesis block"
},
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
}
},
"required": ["providerId"]
}
```
where:
* `genesisHash`:
* **OPTIONAL**; if omitted, the provider **SHOULD** assume the “default” network
* **MUST** be a base64 encoded hash of the genesis block of the network
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Post Transactions Params
[Section titled “Post Transactions Params”](#post-transactions-params)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/post-transactions-params",
"title": "Post Transactions Params",
"description": "Sends a list of signed transactions to be posted to the network by the provider(s)",
"type": "object",
"properties": {
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"stxns": {
"type": "array",
"description": "A list of signed transactions to be posted to the network by the provider(s)",
"items": {
"type": "string"
}
}
},
"required": [
"providerId",
"stxns"
]
}
```
where:
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* `stxns`:
* **MUST** have each item be the base64 encoding of the canonical msgpack encoding of a signed transaction as defined in [ARC-1](/arc-standards/arc-0001#interface-signedtxnstr)
* **MAY** be empty
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Sign And Post Transactions Params
[Section titled “Sign And Post Transactions Params”](#sign-and-post-transactions-params)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/sign-and-post-transactions-params",
"title": "Sign And Post Transactions Params",
"description": "Sends a list of transactions to be signed and posted to the network by the provider(s)",
"type": "object",
"properties": {
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"txns": {
"type": "array",
"description": "A list of transactions to be signed and posted to the network by the provider(s)",
"items": {
"type": "object",
"properties": {
"authAddr": {
"type": "string",
"description": "The auth address if the sender has rekeyed"
},
"msig": {
"type": "object",
"description": "Extra metadata needed when sending multisig transactions",
"properties": {
"addrs": {
"type": "array",
"description": "A list of Algorand addresses representing possible signers for the multisig",
"items": {
"type": "string"
}
},
"threshold": {
"type": "integer",
"description": "Multisig threshold value"
},
"version": {
"type": "integer",
"description": "Multisig version"
}
}
},
"signers": {
"type": "array",
"description": "A list of addresses to sign with",
"items": {
"type": "string"
}
},
"stxn": {
"type": "string",
"description": "The base64 encoded signed transaction"
},
"txn": {
"type": "string",
"description": "The base64 encoded unsigned transaction"
}
},
"required": ["txn"]
}
}
},
"required": [
"providerId",
"txns"
]
}
```
where:
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* `txns`:
* **MUST** have each item conform to the semantic of a transaction in [ARC-1](/arc-standards/arc-0001#semantic-of-wallettransaction)
* **MAY** be empty
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Sign Message Params
[Section titled “Sign Message Params”](#sign-message-params)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/sign-message-params",
"title": "Sign Message Params",
"description": "Sends a UTF-8 encoded message to be signed by the provider(s)",
"type": "object",
"properties": {
"message": {
"type": "string",
"description": "The string to be signed by the provider"
},
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"signer": {
"type": "string",
"description": "The address to be used to sign the message"
}
},
"required": [
"message",
"providerId"
]
}
```
where:
* `message`:
* **MUST** be a string that is compatible with the UTF-8 character set as defined in [RFC-2279](https://www.rfc-editor.org/rfc/rfc2279)
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* `signer`:
* **MUST** be a base32 encoded public key with a 4 byte checksum appended as defined in [keys and addresses](https://developer.algorand.org/docs/get-details/accounts/#keys-and-addresses)
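The address format referenced here can be reproduced with the standard library, assuming the local OpenSSL build exposes SHA-512/256 (as modern CPython builds typically do):

```python
import base64
import hashlib

def encode_address(public_key: bytes) -> str:
    """Encode a 32-byte public key as a 58-character Algorand address."""
    assert len(public_key) == 32
    # Checksum: the last 4 bytes of the SHA-512/256 digest of the public key.
    checksum = hashlib.new("sha512_256", public_key).digest()[-4:]
    # Base32 without the "=" padding yields the familiar 58-character form.
    return base64.b32encode(public_key + checksum).decode("ascii").rstrip("=")

print(len(encode_address(bytes(32))))  # 58
```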
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Sign Transactions Params
[Section titled “Sign Transactions Params”](#sign-transactions-params)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/sign-transactions-params",
"title": "Sign Transactions Params",
"description": "Sends a list of transactions to be signed by the provider(s)",
"type": "object",
"properties": {
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"txns": {
"type": "array",
"description": "A list of transactions to be signed by the provider(s)",
"items": {
"type": "object",
"properties": {
"authAddr": {
"type": "string",
"description": "The auth address if the sender has rekeyed"
},
"msig": {
"type": "object",
"description": "Extra metadata needed when sending multisig transactions",
"properties": {
"addrs": {
"type": "array",
"description": "A list of Algorand addresses representing possible signers for the multisig",
"items": {
"type": "string"
}
},
"threshold": {
"type": "integer",
"description": "Multisig threshold value"
},
"version": {
"type": "integer",
"description": "Multisig version"
}
}
},
"signers": {
"type": "array",
"description": "A list of addresses to sign with",
"items": {
"type": "string"
}
},
"stxn": {
"type": "string",
"description": "The base64 encoded signed transaction"
},
"txn": {
"type": "string",
"description": "The base64 encoded unsigned transaction"
}
},
"required": ["txn"]
}
}
},
"required": [
"providerId",
"txns"
]
}
```
where:
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* `txns`:
* **MUST** have each item conform to the semantic of a transaction in [ARC-1](/arc-standards/arc-0001#semantic-of-wallettransaction)
* **MAY** be empty
[Back to top ^](/arc-standards/arc-0027#abstract)
### Response Message Schema
[Section titled “Response Message Schema”](#response-message-schema)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/response-message",
"title": "Response Message",
"description": "Outlines the structure of a response message",
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "A globally unique identifier for the message",
"format": "uuid"
},
"reference": {
"description": "Identifies the purpose of the message",
"enum": [
"arc0027:disable:response",
"arc0027:discover:response",
"arc0027:enable:response",
"arc0027:post_transactions:response",
"arc0027:sign_and_post_transactions:response",
"arc0027:sign_message:response",
"arc0027:sign_transactions:response"
]
},
"requestId": {
"type": "string",
"description": "The ID of the request message",
"format": "uuid"
}
},
"allOf": [
{
"if": {
"properties": {
"reference": {
"const": "arc0027:disable:response"
}
},
"required": ["id", "reference", "requestId"]
},
"then": {
"oneOf": [
{
"properties": {
"result": {
"$ref": "/schemas/disable-result"
}
}
},
{
"properties": {
"error": {
"$ref": "/schemas/error"
}
}
}
]
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:discover:response"
}
},
"required": ["id", "reference", "requestId"]
},
"then": {
"oneOf": [
{
"properties": {
"result": {
"$ref": "/schemas/discover-result"
}
}
},
{
"properties": {
"error": {
"$ref": "/schemas/error"
}
}
}
]
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:enable:response"
}
},
"required": ["id", "reference", "requestId"]
},
"then": {
"oneOf": [
{
"properties": {
"result": {
"$ref": "/schemas/enable-result"
}
}
},
{
"properties": {
"error": {
"$ref": "/schemas/error"
}
}
}
]
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:post_transactions:response"
}
},
"required": ["id", "reference", "requestId"]
},
"then": {
"oneOf": [
{
"properties": {
"result": {
"$ref": "/schemas/post-transactions-result"
}
}
},
{
"properties": {
"error": {
"$ref": "/schemas/error"
}
}
}
]
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:sign_and_post_transactions:response"
}
},
"required": ["id", "reference", "requestId"]
},
"then": {
"oneOf": [
{
"properties": {
"result": {
"$ref": "/schemas/sign-and-post-transactions-result"
}
}
},
{
"properties": {
"error": {
"$ref": "/schemas/error"
}
}
}
]
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:sign_message:response"
}
},
"required": ["id", "reference", "requestId"]
},
"then": {
"oneOf": [
{
"properties": {
"result": {
"$ref": "/schemas/sign-message-result"
}
}
},
{
"properties": {
"error": {
"$ref": "/schemas/error"
}
}
}
]
}
},
{
"if": {
"properties": {
"reference": {
"const": "arc0027:sign_transactions:response"
}
},
"required": ["id", "reference", "requestId"]
},
"then": {
"oneOf": [
{
"properties": {
"result": {
"$ref": "/schemas/sign-transactions-result"
}
}
},
{
"properties": {
"error": {
"$ref": "/schemas/error"
}
}
}
]
}
}
]
}
```
* `id`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* `reference`:
* **MUST** be a string that conforms to the [message reference naming](#message-reference-naming) convention
* `requestId`:
* **MUST** be the ID of the origin request message
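Because `requestId` echoes the originating request’s `id`, a client can pair a response with its pending request; a sketch, with illustrative helper name and sample IDs:

```python
def is_response_to(response: dict, request: dict) -> bool:
    """Check that a response message answers the given request."""
    expected_reference = request["reference"].replace(":request", ":response")
    return (response.get("requestId") == request["id"]
            and response.get("reference") == expected_reference)

# Sample messages with placeholder UUIDs.
request = {
    "id": "0f0e2132-7d4e-4c2d-9c6b-2b1f0a9d8e7c",
    "reference": "arc0027:enable:request",
}
response = {
    "id": "5d1a9b4e-3c2f-4a8d-b7e6-1f0c9d8e7a6b",
    "reference": "arc0027:enable:response",
    "requestId": request["id"],
}
print(is_response_to(response, request))  # True
```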
[Back to top ^](/arc-standards/arc-0027#abstract)
#### Result Definitions
[Section titled “Result Definitions”](#result-definitions)
##### Disable Result
[Section titled “Disable Result”](#disable-result)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/disable-result",
"title": "Disable Result",
"description": "The response from a disable request",
"type": "object",
"properties": {
"genesisHash": {
"type": "string",
"description": "The unique identifier for the network that is the hash of the genesis block"
},
"genesisId": {
"type": "string",
"description": "A human-readable identifier for the network"
},
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"sessionIds": {
"type": "array",
"description": "A list of specific session IDs that have been removed",
"items": {
"type": "string"
}
}
},
"required": [
"genesisHash",
"genesisId",
"providerId"
]
}
```
where:
* `genesisHash`:
* **MUST** be a base64 encoded hash of the genesis block of the network
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* **MUST** uniquely identify the provider
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Discover Result
[Section titled “Discover Result”](#discover-result)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/discover-result",
"title": "Discover Result",
"description": "The response from a discover request",
"type": "object",
"properties": {
"host": {
"type": "string",
"description": "A domain name of the provider"
},
"icon": {
"type": "string",
"description": "A URI pointing to an image"
},
"name": {
"type": "string",
"description": "A human-readable canonical name of the provider"
},
"networks": {
"type": "array",
"description": "A list of networks available for the provider",
"items": {
"type": "object",
"properties": {
"genesisHash": {
"type": "string",
"description": "The unique identifier for the network that is the hash of the genesis block"
},
"genesisId": {
"type": "string",
"description": "A human-readable identifier for the network"
},
"methods": {
"type": "array",
"description": "A list of methods available from the provider for the chain",
"items": {
"enum": [
"disable",
"enable",
"post_transactions",
"sign_and_post_transactions",
"sign_message",
"sign_transactions"
]
}
}
},
"required": [
"genesisHash",
"genesisId",
"methods"
]
}
},
"providerId": {
"type": "string",
"description": "A globally unique identifier for the provider",
"format": "uuid"
}
},
"required": [
"name",
"networks",
"providerId"
]
}
```
where:
* `host`:
* **RECOMMENDED** to be a URL that points to a live website
* `icon`:
* **RECOMMENDED** to be a URI that conforms to [\[RFC-2397\]](https://www.rfc-editor.org/rfc/rfc2397)
* **SHOULD** be a URI that points to a square image with a 96x96px minimum resolution
* **RECOMMENDED** image format to be either lossless or vector based such as PNG, WebP or SVG
* `name`:
* **SHOULD** be human-readable to allow for display to a user
* `networks`:
* **MAY** be empty
* `networks.genesisHash`:
* **MUST** be a base64 encoded hash of the genesis block of the network
* `networks.methods`:
* **SHOULD** contain one or more of `disable`, `enable`, `post_transactions`, `sign_and_post_transactions`, `sign_message` or `sign_transactions`
* **MAY** be empty
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* **MUST** uniquely identify the provider
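A client consuming discover results might filter for a provider that advertises a given method on a target network. A sketch with illustrative data (the helper name and sample values are ours):

```python
def provider_supports(discover_result: dict, genesis_hash: str, method: str) -> bool:
    """True if the provider advertises `method` on the network with `genesis_hash`."""
    return any(
        network["genesisHash"] == genesis_hash and method in network["methods"]
        for network in discover_result.get("networks", [])
    )

result = {
    "name": "Example Wallet",  # illustrative, not a real provider
    "providerId": "02657eaf-be17-4efc-b0bb-74b4acd1b1f5",
    "networks": [{
        "genesisHash": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
        "genesisId": "mainnet-v1.0",
        "methods": ["enable", "sign_transactions"],
    }],
}
print(provider_supports(result, "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
                        "sign_transactions"))  # True
```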
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Enable Result
[Section titled “Enable Result”](#enable-result)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/enable-result",
"title": "Enable Result",
"description": "The response from an enable request",
"type": "object",
"properties": {
"accounts": {
"type": "array",
"description": "A list of accounts available for the provider",
"items": {
"type": "object",
"properties": {
"address": {
"type": "string",
"description": "The address of the account"
},
"name": {
"type": "string",
"description": "A human-readable name for this account"
}
},
"required": ["address"]
}
},
"genesisHash": {
"type": "string",
"description": "The unique identifier for the network that is the hash of the genesis block"
},
"genesisId": {
"type": "string",
"description": "A human-readable identifier for the network"
},
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"sessionId": {
"type": "string",
"description": "A globally unique identifier for the session as defined by the provider"
}
},
"required": [
"accounts",
"genesisHash",
"genesisId",
"providerId"
]
}
```
where:
* `accounts`:
* **MAY** be empty
* `accounts.address`:
* **MUST** be a base32 encoded public key with a 4 byte checksum appended as defined in [keys and addresses](https://developer.algorand.org/docs/get-details/accounts/#keys-and-addresses)
* `genesisHash`:
* **MUST** be a base64 encoded hash of the genesis block of the network
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* **MUST** uniquely identify the provider
* `sessionId`:
* **RECOMMENDED** to be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Post Transactions Result
[Section titled “Post Transactions Result”](#post-transactions-result)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/post-transactions-result",
"title": "Post Transactions Result",
"description": "The response from a post transactions request",
"type": "object",
"properties": {
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"txnIDs": {
"type": "array",
"description": "A list of IDs for all of the transactions posted to the network",
"items": {
"type": "string"
}
}
},
"required": [
"providerId",
"txnIDs"
]
}
```
where:
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* **MUST** uniquely identify the provider
* `txnIDs`:
* **MUST** contain items that are a 52-character base32 string (without padding) corresponding to a 32-byte transaction ID
* **MAY** be empty
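The 52-character constraint can be checked with a simple pattern (a sketch; the helper name is ours):

```python
import re

# 52 characters of the RFC 4648 base32 alphabet with no "=" padding:
# the unpadded base32 form of a 32-byte transaction ID.
TXN_ID_RE = re.compile(r"[A-Z2-7]{52}")

def is_valid_txn_id(txn_id: str) -> bool:
    return TXN_ID_RE.fullmatch(txn_id) is not None

print(is_valid_txn_id("A" * 52))  # True
print(is_valid_txn_id("a" * 52))  # False (lowercase is not valid base32 here)
```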
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Sign And Post Transactions Result
[Section titled “Sign And Post Transactions Result”](#sign-and-post-transactions-result)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/sign-and-post-transactions-result",
"title": "Sign And Post Transactions Result",
"description": "The response from a sign and post transactions request",
"type": "object",
"properties": {
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"txnIDs": {
"type": "array",
"description": "A list of IDs for all of the transactions posted to the network",
"items": {
"type": "string"
}
}
},
"required": [
"providerId",
"txnIDs"
]
}
```
where:
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* **MUST** uniquely identify the provider
* `txnIDs`:
* **MUST** contain items that are a 52-character base32 string (without padding) corresponding to a 32-byte transaction ID
* **MAY** be empty
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Sign Message Result
[Section titled “Sign Message Result”](#sign-message-result)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/sign-message-result",
"title": "Sign Message Result",
"description": "The response from a sign message request",
"type": "object",
"properties": {
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"signature": {
"type": "string",
"description": "The signature of the message, signed by the private key of the intended signer"
},
"signer": {
"type": "string",
"description": "The address of the signer used to sign the message"
}
},
"required": ["providerId", "signature", "signer"]
}
```
where:
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* **MUST** uniquely identify the provider
* `signature`:
* **MUST** be a base64 encoded string
* `signer`:
* **MUST** be a base32 encoded public key with a 4 byte checksum appended as defined in [keys and addresses](https://developer.algorand.org/docs/get-details/accounts/#keys-and-addresses)
[Back to top ^](/arc-standards/arc-0027#abstract)
##### Sign Transactions Result
[Section titled “Sign Transactions Result”](#sign-transactions-result)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/sign-transactions-result",
"title": "Sign Transactions Result",
"description": "The response from a sign transactions request",
"type": "object",
"properties": {
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
},
"stxns": {
"type": "array",
"description": "A list of signed transactions that is ready to be posted to the network",
"items": {
"type": "string"
}
}
},
"required": ["providerId", "stxns"]
}
```
where:
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* **MUST** uniquely identify the provider
* `stxns`:
* **MUST** contain items that are the base64 encoding of the canonical msgpack encoding of a signed transaction as defined in [ARC-1](/arc-standards/arc-0001#interface-signedtxnstr)
* **MAY** be empty
[Back to top ^](/arc-standards/arc-0027#abstract)
#### Error Definition
[Section titled “Error Definition”](#error-definition)
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "/schemas/error",
"title": "Error",
"description": "Details the type of error and a human-readable message that can be displayed to the user",
"type": "object",
"properties": {
"code": {
"description": "An integer that defines the type of error",
"enum": [
4000,
4001,
4002,
4003,
4004,
4100,
4200,
4201,
4300
]
},
"data": {
"type": "object",
"description": "Additional information about the error"
},
"message": {
"type": "string",
"description": "A human-readable message about the error"
},
"providerId": {
"type": "string",
"description": "A unique identifier for the provider",
"format": "uuid"
}
},
"required": [
"code",
"message"
]
}
```
where:
* `code`:
* **MUST** be a code of one of the [errors](#errors)
* `message`:
* **SHOULD** be human-readable to allow for display to a user
* `providerId`:
* **MUST** be a [UUIDv4](https://www.rfc-editor.org/rfc/rfc4122) compliant string
* **MUST** be present if the error originates from the provider
[Back to top ^](/arc-standards/arc-0027#abstract)
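For illustration, a client could validate a received error object against this schema with a small type guard. The sketch below is not part of this ARC; the helper and type names are my own:

```typescript
// Illustrative type guard for the Error schema above (names are my own,
// not defined by this ARC).
const ARC27_ERROR_CODES = new Set([
  4000, 4001, 4002, 4003, 4004, 4100, 4200, 4201, 4300,
]);

interface Arc27Error {
  code: number;
  message: string;
  providerId?: string; // required when the error originates from the provider
  data?: Record<string, unknown>;
}

function isArc27Error(value: unknown): value is Arc27Error {
  if (typeof value !== "object" || value === null) {
    return false;
  }
  const v = value as Record<string, unknown>;
  // `code` and `message` are the only required properties
  return (
    typeof v.code === "number" &&
    ARC27_ERROR_CODES.has(v.code) &&
    typeof v.message === "string"
  );
}
```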
### Errors
[Section titled “Errors”](#errors)
#### Summary
[Section titled “Summary”](#summary)
| Code | Name | Summary |
| ---- | ------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------- |
| 4000 | [`UnknownError`](#4000-unknownerror) | The default error response, usually indicates something is not quite right. |
| 4001 | [`MethodCanceledError`](#4001-methodcancelederror) | When a user has rejected the method. |
| 4002 | [`MethodTimedOutError`](#4002-methodtimedouterror) | The requested method has timed out. |
| 4003 | [`MethodNotSupportedError`](#4003-methodnotsupportederror) | The provider does not support this method. |
| 4004 | [`NetworkNotSupportedError`](#4004-networknotsupportederror) | Network is not supported. |
| 4100 | [`UnauthorizedSignerError`](#4100-unauthorizedsignererror) | The provider has not given permission to use a specified signer. |
| 4200 | [`InvalidInputError`](#4200-invalidinputerror) | The input for signing transactions is malformed. |
| 4201 | [`InvalidGroupIdError`](#4201-invalidgroupiderror) | The computed group ID of the atomic transactions is different from the assigned group ID. |
| 4300 | [`FailedToPostSomeTransactionsError`](#4300-failedtopostsometransactionserror) | When some transactions were not sent properly. |
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4000 `UnknownError`
[Section titled “4000 UnknownError”](#4000-unknownerror)
This error is the default error and serves as the “catch all” error. This usually occurs when something has happened that is outside the bounds of graceful handling. You can check the `UnknownError.message` string for more information.
The code **MUST** be 4000.
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4001 `MethodCanceledError`
[Section titled “4001 MethodCanceledError”](#4001-methodcancelederror)
This error is thrown when a user has rejected or canceled the requested method on the provider. For example, the user decides to cancel the signing of a transaction.
**Additional Data**
| Name | Type | Value | Description |
| ------ | -------- | ----- | ----------------------------------------- |
| method | `string` | - | The name of the method that was canceled. |
The code **MUST** be 4001.
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4002 `MethodTimedOutError`
[Section titled “4002 MethodTimedOutError”](#4002-methodtimedouterror)
This can be thrown by most methods and indicates that the method has timed out.
**Additional Data**
| Name | Type | Value | Description |
| ------ | -------- | ----- | -------------------------------------- |
| method | `string` | - | The name of the method that timed out. |
The code **MUST** be 4002.
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4003 `MethodNotSupportedError`
[Section titled “4003 MethodNotSupportedError”](#4003-methodnotsupportederror)
This can be thrown by most methods and indicates that the provider does not support the method you are trying to perform.
The code **MUST** be 4003.
**Additional Data**
| Name | Type | Value | Description |
| ------ | -------- | ----- | --------------------------------------------- |
| method | `string` | - | The name of the method that is not supported. |
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4004 `NetworkNotSupportedError`
[Section titled “4004 NetworkNotSupportedError”](#4004-networknotsupportederror)
This error is thrown when the requested genesis hash is not supported by the provider.
The code **MUST** be 4004.
**Additional Data**
| Name | Type | Value | Description |
| ----------- | -------- | ----- | ------------------------------------------------------ |
| genesisHash | `string` | - | The genesis hash of the network that is not supported. |
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4100 `UnauthorizedSignerError`
[Section titled “4100 UnauthorizedSignerError”](#4100-unauthorizedsignererror)
This error is thrown when an account has been specified as a signer, but the provider has not given permission to use that account as a signer.
The code **MUST** be 4100.
**Additional Data**
| Name | Type | Value | Description |
| ------ | -------- | ----- | ------------------------------------------------- |
| signer | `string` | - | The address of the signer that is not authorized. |
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4200 `InvalidInputError`
[Section titled “4200 InvalidInputError”](#4200-invalidinputerror)
This error is thrown when the provider attempts to sign transaction(s), but the input is malformed.
The code **MUST** be 4200.
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4201 `InvalidGroupIdError`
[Section titled “4201 InvalidGroupIdError”](#4201-invalidgroupiderror)
This error is thrown when the provider attempts to sign atomic transactions in which the computed group ID is different from the assigned group ID.
The code **MUST** be 4201.
**Additional Data**
| Name | Type | Value | Description |
| --------------- | -------- | ----- | ---------------------------------------------------- |
| computedGroupId | `string` | - | The computed ID of the supplied atomic transactions. |
[Back to top ^](/arc-standards/arc-0027#abstract)
#### 4300 `FailedToPostSomeTransactionsError`
[Section titled “4300 FailedToPostSomeTransactionsError”](#4300-failedtopostsometransactionserror)
This error is thrown when some transactions failed to be posted to the network.
The code **MUST** be 4300.
**Additional Data**
| Name | Type | Value | Description |
| ------------- | -------------------- | ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| successTxnIDs | `(string \| null)[]` | - | This will correspond to the `stxns` list sent in `post_transactions` & `sign_and_post_transactions` and will contain the ID of those transactions that were successfully committed to the blockchain, or null if they failed. |
[Back to top ^](/arc-standards/arc-0027#abstract)
## Rationale
[Section titled “Rationale”](#rationale)
An original vision for Algorand was that multiple AVM chains could co-exist. Extending the base of each message schema with a targeted network (referenced by its genesis hash) ensures the schema remains AVM chain-agnostic and can be adapted to work with any AVM-compatible chain.
The schema also adds a few methods not mentioned in previous ARCs; these methods were born out of needs observed by providers and clients alike.
The latest JSON Schema draft (the [2020-12](https://json-schema.org/draft/2020-12/draft-bhutton-json-schema-01) draft as of writing) was chosen as the format because it is widely supported across multiple platforms and languages, and because of its popularity.
[Back to top ^](/arc-standards/arc-0027#abstract)
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
### Disable Example
[Section titled “Disable Example”](#disable-example)
**Request**
```json
{
"id": "e44f5bde-37f4-44b0-94d5-1daff41bc984",
"params": {
"genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"sessionIds": ["ab476381-c1f4-4665-b89c-9f386fb6f15d", "7b02d412-6a27-4d97-b091-d5c26387e644"]
},
"reference": "arc0027:disable:request"
}
```
**Response**
```json
{
"id": "e6696507-6a6c-4df8-98c4-356d5351207c",
"reference": "arc0027:disable:response",
"requestId": "e44f5bde-37f4-44b0-94d5-1daff41bc984",
"result": {
"genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
"genesisId": "testnet-v1.0",
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"sessionIds": ["ab476381-c1f4-4665-b89c-9f386fb6f15d", "7b02d412-6a27-4d97-b091-d5c26387e644"]
}
}
```
[Back to top ^](/arc-standards/arc-0027#abstract)
### Discover Example
[Section titled “Discover Example”](#discover-example)
**Request**
```json
{
"id": "5d5186fc-2091-4e88-8ef9-05a5d4da24ed",
"params": {
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa"
},
"reference": "arc0027:discover:request"
}
```
**Response**
```json
{
"id": "6695f990-e3d7-41c4-bb26-64ab8da0653b",
"reference": "arc0027:discover:response",
"requestId": "5d5186fc-2091-4e88-8ef9-05a5d4da24ed",
"result": {
"host": "https://awesome-wallet.com",
"icon": "data:image/png;base64,iVBORw0KGgoAAAANSUh...",
"name": "Awesome Wallet",
"networks": [
{
"genesisHash": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
"genesisId": "mainnet-v1.0",
"methods": [
"disable",
"enable",
"post_transactions",
"sign_and_post_transactions",
"sign_message",
"sign_transactions"
]
},
{
"genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
"genesisId": "testnet-v1.0",
"methods": [
"disable",
"enable",
"post_transactions",
"sign_message",
"sign_transactions"
]
}
],
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa"
}
}
```
[Back to top ^](/arc-standards/arc-0027#abstract)
### Enable Example
[Section titled “Enable Example”](#enable-example)
**Request**
```json
{
"id": "4dd4ccdf-a918-4e33-a675-073330db4c99",
"params": {
"genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa"
},
"reference": "arc0027:enable:request"
}
```
**Response**
```json
{
"id": "cdf43d9e-1158-400b-b2fb-ba45e39548ff",
"reference": "arc0027:enable:response",
"requestId": "4dd4ccdf-a918-4e33-a675-073330db4c99",
"result": {
"accounts": [{
"address": "ARC27GVTJO27GGSWHZR2S3E7UY46KXFLBC6CLEMF7GY3UYF7YWGWC6NPTA",
"name": "Main Account"
}],
"genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
"genesisId": "testnet-v1.0",
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"sessionId": "6eb74cf1-93e8-400c-94b5-4928807a3ab1"
}
}
```
[Back to top ^](/arc-standards/arc-0027#abstract)
### Post Transactions Example
[Section titled “Post Transactions Example”](#post-transactions-example)
**Request**
```json
{
"id": "e555ccb3-4730-474c-92e3-1e42868e0c0d",
"params": {
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"stxns": [
"iaNhbXT..."
]
},
"reference": "arc0027:post_transactions:request"
}
```
**Response**
```json
{
"id": "13b115fb-2966-4a21-b6f7-8aca118ac008",
"reference": "arc0027:post_transactions:response",
"requestId": "e555ccb3-4730-474c-92e3-1e42868e0c0d",
"result": {
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"txnIDs": [
"H2KKVI..."
]
}
}
```
[Back to top ^](/arc-standards/arc-0027#abstract)
### Sign And Post Transactions Example
[Section titled “Sign And Post Transactions Example”](#sign-and-post-transactions-example)
**Request**
```json
{
"id": "43adafeb-d455-4264-a1c0-d86d9e1d75d9",
"params": {
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"txns": [
{
"txn": "iaNhbXT..."
},
{
"txn": "iaNhbXT...",
"signers": []
}
]
},
"reference": "arc0027:sign_and_post_transactions:request"
}
```
**Response**
```json
{
"id": "973df300-f149-4004-9718-b04b5f3991bd",
"reference": "arc0027:sign_and_post_transactions:response",
"requestId": "43adafeb-d455-4264-a1c0-d86d9e1d75d9",
"result": {
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"stxns": [
"iaNhbXT...",
null
]
}
}
```
[Back to top ^](/arc-standards/arc-0027#abstract)
### Sign Message Example
[Section titled “Sign Message Example”](#sign-message-example)
**Request**
```json
{
"id": "8f4aa9e5-d039-4272-95ac-6e972967e0cb",
"params": {
"message": "Hello humie!",
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"signer": "ARC27GVTJO27GGSWHZR2S3E7UY46KXFLBC6CLEMF7GY3UYF7YWGWC6NPTA"
},
"reference": "arc0027:sign_message:request"
}
```
**Response**
```json
{
"id": "9bdf72bf-218e-462a-8f64-3a40ef4a4963",
"reference": "arc0027:sign_message:response",
"requestId": "8f4aa9e5-d039-4272-95ac-6e972967e0cb",
"result": {
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"signature": "iaNhbXT...",
"signer": "ARC27GVTJO27GGSWHZR2S3E7UY46KXFLBC6CLEMF7GY3UYF7YWGWC6NPTA"
}
}
```
[Back to top ^](/arc-standards/arc-0027#abstract)
### Sign Transactions Example
[Section titled “Sign Transactions Example”](#sign-transactions-example)
**Request**
```json
{
"id": "464e6b88-8860-403c-891d-7de6d0425686",
"params": {
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"txns": [
{
"txn": "iaNhbXT..."
},
{
"txn": "iaNhbXT...",
"signers": []
}
]
},
"reference": "arc0027:sign_transactions:request"
}
```
**Response**
```json
{
"id": "f5a56135-5cd2-4f3f-8757-7b89d32d67e0",
"reference": "arc0027:sign_transactions:response",
"requestId": "464e6b88-8860-403c-891d-7de6d0425686",
"result": {
"providerId": "85533948-4d0b-4727-904e-dd35305d49aa",
"stxns": [
"iaNhbXT...",
null
]
}
}
```
[Back to top ^](/arc-standards/arc-0027#abstract)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
As this ARC only serves as the formalization of the message schema, the end-to-end security of the actual messages is beyond the scope of this ARC. It is **RECOMMENDED** that another ARC be proposed to advise on this topic, with reference to this ARC.
[Back to top ^](/arc-standards/arc-0027#abstract)
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
[Back to top ^](/arc-standards/arc-0027#abstract)
# Algorand Event Log Spec
> A methodology for structured logging by Algorand dapps.
## Abstract
[Section titled “Abstract”](#abstract)
Algorand dapps can use the [`log`](https://developer.algorand.org/docs/get-details/dapps/avm/teal/opcodes/#log) primitive to attach information about an application call. This ARC proposes the concept of Events, which are merely a way in which data contained in these logs may be categorized and structured.
In short: to emit an Event, a dapp calls `log` with ABI formatting of the log data, and a 4-byte prefix to indicate which Event it is.
## Specification
[Section titled “Specification”](#specification)
Each kind of Event emitted by a given dapp has a unique 4-byte identifier. This identifier is derived from its name and the structure of its contents, like so:
### Event Signature
[Section titled “Event Signature”](#event-signature)
An Event Signature is a UTF-8 string consisting of: the name of the event, followed by an open paren, followed by the comma-separated names of the data types contained in the event (the supported types are the same as in [ARC-4](/arc-standards/arc-0004#types)), followed by a close paren. This follows naming conventions similar to ABI signatures, but does not include the return type.
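This construction can be sketched in code; the `getEventSignature` helper and its interfaces below are illustrative, not part of this ARC:

```typescript
// Illustrative helper (not defined by this ARC) that builds an Event
// Signature string: the event name followed by the parenthesized,
// comma-separated argument types.
interface EventArg {
  type: string;
  name?: string;
}

interface EventDescription {
  name: string;
  args: EventArg[];
}

function getEventSignature(event: EventDescription): string {
  return `${event.name}(${event.args.map((a) => a.type).join(",")})`;
}

// Example: an event named "Swapped" with two uint64 arguments yields
// the signature "Swapped(uint64,uint64)".
```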
### Deriving the 4-byte prefix from the Event Signature
[Section titled “Deriving the 4-byte prefix from the Event Signature”](#deriving-the-4-byte-prefix-from-the-event-signature)
To derive the 4-byte prefix from the Event Signature, perform the `sha512/256` hash algorithm on the signature, and select the first 4 bytes of the result.
This is the same process that is used by the [ABI Method Selector](/arc-standards/arc-0004#method-selector) as specified in ARC-4.
### Argument Encoding
[Section titled “Argument Encoding”](#argument-encoding)
The arguments of an event **MUST** be encoded as a single [ARC-4](/arc-standards/arc-0004) tuple (as opposed to concatenating the encoded values together). For example, an event signature `foo(string,string)` would contain the 4-byte prefix and a `(string,string)` encoded byteslice.
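For static argument types the ARC-4 tuple encoding is simply the big-endian concatenation of the values, so a full log payload can be built directly. The sketch below uses the `Swapped(uint64,uint64)` event from the reference implementation section of this document, with its 4-byte prefix `0x1ccbd925` precomputed; the helper name is my own:

```typescript
// Sketch: building the log payload for a `Swapped(uint64,uint64)` event.
// The 4-byte prefix 0x1ccbd925 is the first 4 bytes of
// sha512/256("Swapped(uint64,uint64)"), precomputed here. The two static
// uint64 arguments encode as an ARC-4 tuple, which for static types is
// just their big-endian concatenation.
function encodeSwappedEvent(a: bigint, b: bigint): Uint8Array {
  const out = new Uint8Array(4 + 8 + 8);
  out.set([0x1c, 0xcb, 0xd9, 0x25], 0); // event prefix
  const view = new DataView(out.buffer);
  view.setBigUint64(4, a); // first uint64, big-endian
  view.setBigUint64(12, b); // second uint64, big-endian
  return out;
}

// Swapped(42, 100) produces the log data used in the reference
// implementation section of this document:
// Buffer.from(encodeSwappedEvent(42n, 100n)).toString("base64")
// → "HMvZJQAAAAAAAAAqAAAAAAAAAGQ="
```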
### ARC-4 Extension
[Section titled “ARC-4 Extension”](#arc-4-extension)
#### Event
[Section titled “Event”](#event)
An event is represented as follows:
```typescript
interface Event {
/** The name of the event */
name: string;
/** Optional, user-friendly description for the event */
desc?: string;
/** The arguments of the event, in order */
args: Array<{
/** The type of the argument */
type: string;
/** Optional, user-friendly name for the argument */
name?: string;
/** Optional, user-friendly description for the argument */
desc?: string;
}>;
}
```
#### Method
[Section titled “Method”](#method)
This ARC extends ARC-4 by adding an array of events of type `Event[]` to the `Method` interface. Concretely, this gives the following extended Method interface:
```typescript
interface Method {
/** The name of the method */
name: string;
/** Optional, user-friendly description for the method */
desc?: string;
/** The arguments of the method, in order */
args: Array<{
/** The type of the argument */
type: string;
/** Optional, user-friendly name for the argument */
name?: string;
/** Optional, user-friendly description for the argument */
desc?: string;
}>;
/** All of the events that the method uses */
events: Event[];
/** Information about the method's return value */
returns: {
/** The type of the return value, or "void" to indicate no return value. */
type: string;
/** Optional, user-friendly description for the return value */
desc?: string;
};
}
```
#### Contract
[Section titled “Contract”](#contract)
> Even though events are already listed inside each `Method`, the contract **MUST** also provide a top-level array of `Event`s to improve readability.
```typescript
interface Contract {
/** A user-friendly name for the contract */
name: string;
/** Optional, user-friendly description for the interface */
desc?: string;
/**
* Optional object listing the contract instances across different networks
*/
networks?: {
/**
* The key is the base64 genesis hash of the network, and the value contains
* information about the deployed contract in the network indicated by the
* key
*/
[network: string]: {
/** The app ID of the deployed contract in this network */
appID: number;
}
}
/** All of the methods that the contract implements */
methods: Method[];
/** All of the events that the contract contains */
events: Event[];
}
```
## Rationale
[Section titled “Rationale”](#rationale)
Event logging allows a dapp to convey useful information about the things it is doing. Well-designed Event logs allow observers to more easily interpret the history of interactions with the dapp. A structured approach to Event logging could also allow for indexers to more efficiently store and serve queryable data exposed by the dapp about its history.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
### Sample interpretation of Event log data
[Section titled “Sample interpretation of Event log data”](#sample-interpretation-of-event-log-data)
An exchange dapp might emit a `Swapped` event with two `uint64` values representing quantities of currency swapped. The event signature would be: `Swapped(uint64,uint64)`.
Suppose that dapp emits the following log data (seen here as base64 encoded): `HMvZJQAAAAAAAAAqAAAAAAAAAGQ=`.
Suppose also that the dapp developers have declared that it follows this spec for Events, and have published the signature `Swapped(uint64,uint64)`.
We can attempt to parse this log data to see if it is one of these events, as follows. (This example is written in JavaScript.)
First, we can determine the expected 4-byte prefix by following the spec above:
```js
> { sha512_256 } = require('js-sha512')
> sig = 'Swapped(uint64,uint64)'
'Swapped(uint64,uint64)'
> hash = sha512_256(sig)
'1ccbd9254b9f2e1caf190c6530a8d435fc788b69954078ab937db9b5540d9567'
> prefix = hash.slice(0,8) // 8 nibbles = 4 bytes
'1ccbd925'
```
Next, we can inspect the data to see if it matches the expected format: 4 bytes for the prefix, 8 bytes for the first uint64, and 8 bytes for the next.
```js
> b = Buffer.from('HMvZJQAAAAAAAAAqAAAAAAAAAGQ=', 'base64')
> b.slice(0,4).toString('hex')
'1ccbd925'
> b.slice(4, 12).readBigUInt64BE()
42n
> b.slice(12, 20).readBigUInt64BE()
100n
```
We see that the 4-byte prefix matches the signature for `Swapped(uint64,uint64)`, and that the rest of the data can be interpreted using the types declared for that signature. We interpret the above Event data to be: `Swapped(0x2a,0x64)`, meaning `Swapped(42,100)`.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
As specified in ARC-4, methods which have a `return` value MUST NOT emit an event after they log their `return` value.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Application Specification
> A specification for fully describing an Application, useful for Application clients.
## Abstract
[Section titled “Abstract”](#abstract)
> [!NOTE] This specification will eventually be deprecated by the [`ARC-56`](https://github.com/algorandfoundation/ARCs/pull/258) specification.
An Application is partially defined by its [methods](/arc-standards/arc-0004), but further information about the Application should be available. Other descriptive elements of an application may include its State Schema, the original TEAL source programs, default method arguments, and custom data types. This specification defines the descriptive elements of an Application that should be available to clients to provide useful information for an Application Client.
## Motivation
[Section titled “Motivation”](#motivation)
As more complex Applications are created and deployed, some consistent way to specify the details of the application and how to interact with it becomes more important. A specification to allow a consistent and complete definition of an application will help developers attempting to integrate an application they’ve never worked with before.
## Specification
[Section titled “Specification”](#specification)
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).
### Definitions
[Section titled “Definitions”](#definitions)
* [Application Specification](#application-specification): The object containing the elements describing the Application.
* [Source Specification](#source-specification): The object containing a description of the TEAL source programs that are evaluated when this Application is called.
* [Schema Specification](#schema-specification): The object containing a description of the schema required by the Application.
* [Bare Call Specification](#bare-call-specification): The object containing a map of on completion actions to allowable calls for bare methods
* [Hints Specification](#hints-specification): The object containing a map of method signatures to meta data about each method
### Application Specification
[Section titled “Application Specification”](#application-specification)
The Application Specification is composed of a number of elements that serve to fully describe the Application.
```ts
type AppSpec = {
// embedded contract fields, see ARC-0004 for more
contract: ARC4Contract;
// the original teal source, containing annotations, base64 encoded
source?: SourceSpec;
// the schema this application requires/provides
schema?: SchemaSpec;
// supplemental information for calling bare methods
bare_call_config?: CallConfigSpec;
// supplemental information for calling ARC-0004 ABI methods
hints: HintsSpec;
// storage requirements
state?: StateSpec;
}
```
### Source Specification
[Section titled “Source Specification”](#source-specification)
Contains the source TEAL files including comments and other annotations.
```ts
// Object containing the original TEAL source files
type SourceSpec = {
// b64 encoded approval program
approval: string;
// b64 encoded clear state program
clear: string;
}
```
### Schema Specification
[Section titled “Schema Specification”](#schema-specification)
The schema of an application is critical to know prior to creation since it is immutable after creation. It also helps clients of the application understand the data that is available to be queried from off-chain. Individual fields can be referenced from the [default argument](#default-argument) to provide input data to a given ABI method.
While some fields are possible to know ahead of time, others may be keyed dynamically. In both cases the data type being stored MUST be known and declared ahead of time.
```ts
// The complete schema for this application
type SchemaSpec = {
local: Schema;
global: Schema;
}
// Schema fields may be declared explicitly or reserved
type Schema = {
declared: Record<string, DeclaredSchemaValueSpec>;
reserved: Record<string, ReservedSchemaValueSpec>;
}
// Types supported for encoding/decoding
enum AVMType { uint64, bytes }
// string encoded datatype name defined in arc-4
type ABIType = string;
// Fields that have an explicit key
type DeclaredSchemaValueSpec = {
type: AVMType | ABIType;
key: string;
descr: string;
}
// Fields that have an undetermined key
type ReservedSchemaValueSpec = {
type: AVMType | ABIType;
descr: string;
max_keys: number;
}
```
### Bare call specification
[Section titled “Bare call specification”](#bare-call-specification)
Describes the supported OnComplete actions for bare calls on the contract.
```ts
// describes under what conditions an associated OnCompletion type can be used with a particular method
// NEVER: Never handle the specified on completion type
// CALL: Only handle the specified on completion type for application calls
// CREATE: Only handle the specified on completion type for application create calls
// ALL: Handle the specified on completion type for both create and normal application calls
type CallConfig = 'NEVER' | 'CALL' | 'CREATE' | 'ALL'
type CallConfigSpec = {
// lists the supported CallConfig for each on completion type, if not specified a CallConfig of NEVER is assumed
no_op?: CallConfig
opt_in?: CallConfig
close_out?: CallConfig
update_application?: CallConfig
delete_application?: CallConfig
}
```
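A client could use this configuration to decide whether a given bare call is permitted. A minimal sketch (the `isCallAllowed` helper is my own, not part of this specification):

```typescript
// Sketch: checking whether an application call with a given OnCompletion
// type is permitted by a CallConfig value. Helper name is illustrative.
type CallConfig = "NEVER" | "CALL" | "CREATE" | "ALL";

function isCallAllowed(config: CallConfig | undefined, isCreate: boolean): boolean {
  const effective = config ?? "NEVER"; // unspecified defaults to NEVER
  if (effective === "ALL") return true;
  if (effective === "NEVER") return false;
  // CREATE permits only application create calls; CALL only normal calls
  return effective === "CREATE" ? isCreate : !isCreate;
}
```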
### Hints specification
[Section titled “Hints specification”](#hints-specification)
Contains supplemental information about [ARC-0004](/arc-standards/arc-0004) ABI methods; each record represents a single method in the [ARC-0004](/arc-standards/arc-0004) contract. The record key should be the corresponding ABI method signature.
NOTE: Ideally this information would be part of the [ARC-0004](/arc-standards/arc-0004) ABI specification.
```ts
type HintSpec = {
// indicates the method has no side-effects and can be called via dry-run/simulate
read_only?: boolean;
// describes the structure of arguments, key represents the argument name
structs?: Record<string, StructSpec>;
// describes source of default values for arguments, key represents the argument name
default_arguments?: Record<string, DefaultArgumentSpec>;
// describes which OnCompletion types are supported
call_config: CallConfigSpec;
}
// key represents the method signature for an ABI method defined in 'contracts'
type HintsSpec = Record<string, HintSpec>
```
#### Readonly Specification
[Section titled “Readonly Specification”](#readonly-specification)
Indicates the method has no side-effects and can be called via dry-run/simulate.
NOTE: This property is made obsolete by [ARC-0022](/arc-standards/arc-0022) but is included as it is currently used by existing reference implementations such as Beaker.
#### Struct Specification
[Section titled “Struct Specification”](#struct-specification)
Each defined type is specified as an array of `StructElement`s.
The ABI encoding is exactly as if an ABI Tuple type defined the same element types in the same order. It is important to encode the struct elements as an array since it preserves the order of fields which is critical to encoding/decoding the data properly.
```ts
// Type aliases for readability
type FieldName = string
// string encoded datatype name defined in ARC-0004
type ABIType = string
// Each field in the struct contains a name and ABI type
type StructElement = [FieldName, ABIType]
// Type aliases for readability
type ContractDefinedType = StructElement[]
type ContractDefinedTypeName = string;
// represents an input/output structure
type StructSpec = {
name: ContractDefinedTypeName
elements: ContractDefinedType
}
```
For example, a `ContractDefinedType` is specified as an array of `StructElement`s. Given the PyTeal:
```py
from pyteal import abi
class Thing(abi.NamedTuple):
addr: abi.Field[abi.Address]
balance: abi.Field[abi.Uint64]
```
the equivalent ABI type is `(address,uint64)` and an element in the TypeSpec is:
```js
{
// ...
"Thing": [["addr", "address"], ["balance", "uint64"]],
// ...
}
```
#### Default Argument
[Section titled “Default Argument”](#default-argument)
Defines how default argument values can be obtained. The `source` field defines how a default value is obtained, the `data` field contains additional information based on the `source` value.
Valid values for `source` are:
* “constant” - `data` is the value to use
* “global-state” - `data` is the global state key.
* “local-state” - `data` is the local state key
* “abi-method” - `data` is a reference to the ABI method to call. Method should be read only and return a value of the appropriate type
Two scenarios where providing default arguments can be useful:
1. Providing a default value for optional arguments
2. Providing a value for required arguments such as foreign asset or application references without requiring the client to explicitly determine these values when calling the contract
```ts
// ARC-0004 ABI method definition
type ABIMethod = {};
type DefaultArgumentSpec = {
// Where to look for the default arg value
source: "constant" | "global-state" | "local-state" | "abi-method"
// extra data to include when looking up the value
data: string | bigint | number | ABIMethod
}
```
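As a sketch of how a client could consume these specs, the resolution logic for each `source` value might look like the following. The `resolveDefaultArg` helper and its lookup callbacks are illustrative, not part of this specification:

```typescript
// Sketch: resolving a default argument value from a DefaultArgumentSpec.
// The state lookups and ABI-method invocation are supplied by the caller;
// all helper names here are illustrative.
type ABIMethod = object; // ARC-0004 ABI method definition (opaque here)

type DefaultArgumentSpec = {
  source: "constant" | "global-state" | "local-state" | "abi-method";
  data: string | bigint | number | ABIMethod;
};

type Lookups = {
  globalState: (key: string) => unknown;
  localState: (key: string) => unknown;
  callMethod: (method: ABIMethod) => unknown;
};

function resolveDefaultArg(spec: DefaultArgumentSpec, lookups: Lookups): unknown {
  switch (spec.source) {
    case "constant":
      return spec.data; // the value itself
    case "global-state":
      return lookups.globalState(spec.data as string); // data is the global state key
    case "local-state":
      return lookups.localState(spec.data as string); // data is the local state key
    case "abi-method":
      return lookups.callMethod(spec.data as ABIMethod); // data references a read-only method
  }
}
```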
### State Specifications
[Section titled “State Specifications”](#state-specifications)
Describes the total storage requirements for both global and local storage. This should include both the declared and reserved entries described in the SchemaSpec.
NOTE: If the Schema specification contained additional information such that the size could be calculated, then this specification would not be required.
```ts
type StateSchema = {
// how many byte slices are required
num_byte_slices: number
// how many uints are required
num_uints: number
}
type StateSpec = {
// schema specification for global storage
global: StateSchema
// schema specification for local storage
local: StateSchema
}
```
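The note above can be made concrete: given a schema, the slot counts could be derived by summing declared entries and reserved `max_keys`. A sketch under the assumption that `uint64` values occupy uint slots and all other (byte or ABI-encoded) values occupy byte-slice slots; helper and type names are my own:

```typescript
// Sketch: deriving StateSchema slot counts from schema entries, assuming
// uint64 values occupy uint slots and everything else occupies byte-slice
// slots. Helper and type names are illustrative.
type DeclaredEntry = { type: string; key: string };
type ReservedEntry = { type: string; max_keys: number };

type StateSchema = { num_uints: number; num_byte_slices: number };

function deriveStateSchema(
  declared: DeclaredEntry[],
  reserved: ReservedEntry[]
): StateSchema {
  const schema: StateSchema = { num_uints: 0, num_byte_slices: 0 };
  const add = (type: string, count: number) => {
    if (type === "uint64") {
      schema.num_uints += count;
    } else {
      schema.num_byte_slices += count;
    }
  };
  for (const entry of declared) add(entry.type, 1); // one slot per declared key
  for (const entry of reserved) add(entry.type, entry.max_keys); // reserved key budget
  return schema;
}
```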
### Reference schema
[Section titled “Reference schema”](#reference-schema)
A full JSON schema for application.json can be found [here](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0032/application.schema.json).
## Rationale
[Section titled “Rationale”](#rationale)
The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g. how the feature is supported in other languages.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
All ARCs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ARC must explain how the author proposes to deal with these incompatibilities. ARC submissions without a sufficient backwards compatibility treatise may be rejected outright.
## Test Cases
[Section titled “Test Cases”](#test-cases)
Test cases for an implementation are mandatory for ARCs that affect consensus. If the test suite is too large to reasonably be included inline, then consider adding it as one or more files in `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-####/`.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
`algokit-utils-py` and `algokit-utils-ts` both provide reference implementations for the specification structure and for using the data in an `ApplicationClient`. `Beaker` provides a reference implementation for creating an application.json from a smart contract.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
All ARCs must contain a section that discusses the security implications/considerations relevant to the proposed change. Include information that might be important for security discussions, that surfaces risks, and that can be used throughout the life cycle of the proposal. E.g. include security-relevant design decisions, concerns, important discussions, implementation-specific guidance and pitfalls, and an outline of threats and risks and how they are being addressed. ARC submissions missing the “Security Considerations” section will be rejected. An ARC cannot proceed to status “Final” without a Security Considerations discussion deemed sufficient by the reviewers.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# xGov Pilot - Becoming an xGov
> Explanation on how to become Expert Governors.
## Abstract
[Section titled “Abstract”](#abstract)
This ARC proposes a standard for achieving xGov status in the Algorand governance process. xGov status grants the right to vote on [ARC-34](/arc-standards/arc-0034) proposals raised by the community, specifically spending a previously specified amount of Algo in a given Term on particular initiatives.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
| Algorand xGovernor Summary | |
| -------------------------- | - |
| Enrolment | At the start of each governance period |
| How to become eligible | Completed participation in the previous governance period, through official or approved decentralized finance governance |
| Requisite | Commitment of governance rewards for one year |
| Duration | 1 year |
| Voting Power | 1 Algo committed = 1 Vote, as per REWARDS DEPOSIT |
| Duty | Spend all available votes each time a voting period occurs (if no proposal aligns with an xGov’s preference, a mock proposal can be used as an alternative) |
| Disqualification | Forfeit of the pledged rewards |
### What is an xGov?
[Section titled “What is an xGov?”](#what-is-an-xgov)
xGovs, or Expert Governors, are a **self-selected** group of decentralized decision makers who demonstrate an enduring commitment to the Algorand community, possess a deep understanding of the blockchain’s inner workings and realities of the Algorand community, and whose interests are aligned with the good of the Algorand blockchain. These individuals have the ability to participate in the designation **and** approval of proposals, and play an instrumental role in shaping the future of the Algorand ecosystem.
### Requirement to become an xGov
[Section titled “Requirement to become an xGov”](#requirement-to-become-an-xgov)
To become an xGov, or Expert Governor, an account:
* **MUST** first be deemed eligible by having fully participated in the previous governance period, either through official or approved decentralized finance governance.
* At the start of each governance period, eligible participants will have the option to enrol in the xGov program.
* To gain voting power as an xGov, the eligible **governor rewards for the period of the enrolment** **MUST** be committed to the xGov Term Pool and locked for a period of 12 months.
> Only the GP rewards are deposited to the xGov Term Pool. The principal Algo committed remains in the gov wallet (or DeFi protocol) and can be used in subsequent Governance Periods.
Rewards deposited to the xGov Term Pool will be called the **REWARDS DEPOSIT**.
### Voting Power
[Section titled “Voting Power”](#voting-power)
Voting power in the xGov process is determined by the amount of Algo an eligible participant commits. Voting power is 1 Algo = 1 Vote, as per the REWARDS DEPOSIT, and it renews at the start of every quarter, provided the xGov remains eligible. This ensures that the weight of each vote is directly proportional to the level of investment and commitment to the Algorand ecosystem.
### Duty of an xGov
[Section titled “Duty of an xGov”](#duty-of-an-xgov)
As an xGov, you **MUST** actively participate in the governance process by using all available votes amongst proposals each time a voting period occurs. Failure to do so will result in disqualification.
> eg. For 100 Algo as per REWARDS DEPOSIT, 100 votes available, they can be spent like this:
>
> * 50 on proposal A
> * 20 on proposal B
> * 30 on proposal C
> * 0 on every other proposal
>
> In case no proposal aligns with an xGov’s preference, a mock proposal can be used as an alternative.
### Disqualification
[Section titled “Disqualification”](#disqualification)
As an xGov, it is important to understand the significance of your role in the governance process and to fulfill the responsibilities that come with it; failure to do so will result in disqualification. The consequences of disqualification are significant: a disqualified xGov loses the rewards that were committed when they entered the xGov process. Take your role as an xGov seriously to ensure the success of the governance process.
> The rewards will remain in the xGov reward pools & will be distributed among remaining xGovs
## Rationale
[Section titled “Rationale”](#rationale)
This proposal provides a clear and simple method for participation in the xGov process, while also providing incentives for long-term commitment to the network. Separate pools for xGov and Gov allow for a more diverse range of participation, with the xGov pool providing an additional incentive for longer-term commitment. The requirement to spend 100% of your votes on proposals ensures that participants are actively engaged in the decision-making process.
After weeks of engagement with the community, it has been decided:
* That the xGov process will not utilize a token or NFT.
* That there will be no minimum or maximum amount of Algo required to participate in the xGov process.
* That the possibility of node operation counting as a form of participation eligibility will be explored in the future.

This approach aims to make the xGov process accessible and inclusive for all members of the community.
We encourage the community to continue to provide input on this topic through the submission of questions and ideas in this ARC document.
> **Important**: The xGov program is still a work in progress, and changes are expected to happen over the next few years with community input and design consultation. Criteria to ENTER the program will only be applied forward, which means Term Pools already in place will not be affected by any NEW ENTRY criteria. However, other ELIGIBILITY criteria could be added and applied to all pools. For example, if the majority of the community deems it necessary to have more than 1 voting session per quarter, this type of change could be applied to all Term Pools, given ample notice and time for preparation.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
No funds need to leave the user’s wallet in order to become an xGov.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# xGov Pilot - Proposal Process
> Criteria for the creation of proposals.
## Abstract
[Section titled “Abstract”](#abstract)
The goal of this ARC is to clearly define the steps involved in submitting proposals for the xGov Program, to increase transparency and efficiency, and to ensure all proposals are given proper consideration. The goal of this grants scheme is to fund proposals that will help increase the adoption of the Algorand network, the most advanced layer-1 blockchain to date. The program aims to fund proposals to develop open source software, including tooling, as well as educational resources to help inform and grow the Algorand community.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### What is a proposal
[Section titled “What is a proposal”](#what-is-a-proposal)
The xGov program aims to provide funding for individuals or teams to:
* Develop open source applications and tools (eg. an open source AMM, or contributing content to an Algorand software library).
* Develop Algorand education resources, preferably in languages where the resources are not yet available (eg. a video series teaching developers about Algorand in Portuguese or Indonesian).
The remainder of the xGov program pilot will not fund proposals for:
* Supplying liquidity.
* Reserving funds to pay for ad-hoc open-source development (devs can apply directly for an xGov grant).
* Buying ASAs, including NFTs.
Proposals **SHALL NOT** be divided into small chunks.
> Issues requiring resolution may have been discussed on various online platforms such as forums, Discord, and social media networks. Proposals requesting a large amount of funds **MUST BE** split into a milestone-based plan. See [Submit a proposal](/arc-standards/arc-0034#submit-a-proposal).
### Duty of a proposer
[Section titled “Duty of a proposer”](#duty-of-a-proposer)
Having the ability to propose measures for a vote is a significant privilege, which requires:
* A thorough understanding of the needs of the community.
* Alignment of personal interests with the advancement of the Algorand ecosystem.
* Promoting good behavior amongst proposers and discouraging “gaming the system”.
* Reporting flaws and discussing possible solutions with the AF team and community using either the Algorand Forum or the xGov Discord channels.
### Life of a proposal
[Section titled “Life of a proposal”](#life-of-a-proposal)
The proposal process will follow the steps below:
* Anyone can submit a proposal at any time.
* Proposals will be evaluated and refined by the community and xGovs before they are available for voting.
* Up to one month is allocated for voting on proposals.
* The community will vote on proposals that have passed the refinement and temperature check stage.
> If too many proposals are received in a short period of time, xGovs can elect to close proposals in order to handle the volume appropriately.
### Submit a proposal
[Section titled “Submit a proposal”](#submit-a-proposal)
In order to submit a proposal, a proposer needs to create a pull request on the following repository: [xGov Proposals](https://github.com/algorandfoundation/xGov).
Proposals **MUST**:
* Be posted on the [Algorand Forum](https://forum.algorand.org/) (using the tags: Governance and xGov Proposals) and discussed with the community during the review phase. Proposals without a discussion thread WILL NOT be included in the voting session.
* Follow the [template form provided](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0034/TemplateForm.md), filling in all the template sections.
* Follow the rules of the xGov Proposals Repository.
* Request a minimum amount of 10 000 Algo.
* Have the status `Final` before the end of the temperature check.
* Be either Proactive (the content of the proposal is yet to be created) or Retroactive (the content of the proposal is already created).
* Milestone-based grants must submit a proposal for one milestone at a time.
* Milestones need to follow the governance period cycle. With the current 3-month cycle, a milestone could be 3 months, 6 months, 9 months, etc.
* The proposal must display all milestones with clear deliverables, and the amount requested must match the 1st milestone. If a second milestone proposal is submitted, it must display the first completed milestone, linking all deliverables. If a third milestone proposal is submitted, it must display the first and second completed milestones, linking all deliverables. This repeats until all milestones are completed.
* Funding will only be disbursed upon the completion of deliverables.
* A proposal must specify how its delivery can be verified, so that it can be checked prior to payment.
* Proposals must include clear, non-technical descriptions of deliverables. We encourage the use of multimedia (blog/video) to help explain your proposal’s benefits to the community.
* Contain the maintenance period, availability, and sustainability plans. This includes information on potential costs and the duration for which services will be offered at no or reduced cost.
Proposals **MUST NOT**:
* Request funds for marketing campaigns or organizing future meetups.
> Each entity, individual, or project can submit at most two proposals (one proactive proposal and one retroactive proposal). Attempts to circumvent this rule may lead to disqualification or denial of funds.
### Disclaimer jurisdictions and exclusions
[Section titled “Disclaimer jurisdictions and exclusions”](#disclaimer-jurisdictions-and-exclusions)
To be eligible to apply for a grant, projects must abide by the [Disclaimers](https://www.algorand.foundation/disclaimers) (in particular the “Excluded Jurisdictions” section) and be willing to enter into [a binding contract with the Algorand Foundation](https://drive.google.com/file/d/1dsKwQGhnS3h_PrSkoidhnvqlX7soLpZ-/view). Additionally, applications promoting gambling, adult content, drug use, and violence of any kind are not permitted.
> We are currently accepting grant applications from US-based individuals/businesses. If the grant is approved, Algo will be converted to USDCa upon payment. This exception will be reviewed periodically.
### Voting Power
[Section titled “Voting Power”](#voting-power)
When an account participates in its first session, the voting power assigned to it will be equivalent to the total governance rewards it would have received. For all following sessions, the account’s voting power will adjust based on the rewards lost by members in their pool who did not meet their obligations.
The voting power for an upcoming session is computed as: `new_account_voting_power = (initial_pool_voting_power * initial_account_voting_power) / pool_voting_power_used`
Where:
* `new_account_voting_power`: Voting power allocated to an account for the next session.
* `initial_account_voting_power`: The voting power originally assigned to an account, based on the governance rewards.
* `initial_pool_voting_power`: The total voting power of the pool during its initial phase. This is the sum of governance rewards for all pool participants.
* `pool_voting_power_used`: The voting power from the pool that was actually used in the last session.
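The recalculation above can be sketched as a small helper; the function name and the sample numbers are illustrative, not from any official implementation:

```typescript
// Recompute an account's voting power for the next session, redistributing
// the weight of votes the pool left unused in the previous session.
function newAccountVotingPower(
  initialPoolVotingPower: number,
  initialAccountVotingPower: number,
  poolVotingPowerUsed: number,
): number {
  return (initialPoolVotingPower * initialAccountVotingPower) / poolVotingPowerUsed;
}

// A pool started with 1,000,000 votes; this account held 10,000 of them.
// Only 800,000 of the pool's votes were used last session, so active voters
// gain weight proportionally:
const next = newAccountVotingPower(1_000_000, 10_000, 800_000);
console.log(next); // 12500
```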
### Proposal Approval Threshold
[Section titled “Proposal Approval Threshold”](#proposal-approval-threshold)
In order for a proposal to be approved, it is necessary for the number of votes in favor of the proposal to be proportionate to the amount of funds requested. This ensures that the allocation of funds is in line with the community’s consensus and in accordance with democratic principles.
The formula to calculate the voting power needed to pass a proposal is as follows: `voting_power_needed = (amount_requested) / (amount_available) * (current_session_voting_power_used)`
Where:
* `voting_power_needed`: Voting power required for a proposal to be accepted.
* `amount_requested`: The requested amount a proposal is seeking.
* `amount_available`: The entire grant funds available for the current session.
* `current_session_voting_power_used`: The voting power used in the current session.
> eg. 2 000 000 Algo are available to be given away as grants, 300 000 000 Algo are committed to the xGov Process, 200 000 000 Algo are used during the vote:
>
> * Proposal A requests 100 000 Algo (5% of the amount available)
> * Proposal A needs 5% of the used votes (10 000 000 votes) to go through
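The worked example above can be reproduced with a small helper; this is an illustrative sketch, not an official implementation:

```typescript
// Voting power needed for a proposal to pass, proportional to the share of
// the available grant funds it requests.
function votingPowerNeeded(
  amountRequested: number,
  amountAvailable: number,
  currentSessionVotingPowerUsed: number,
): number {
  return (amountRequested / amountAvailable) * currentSessionVotingPowerUsed;
}

// 2,000,000 Algo available, 200,000,000 votes used; a 100,000 Algo request
// (5% of the available funds) needs 5% of the used votes:
const needed = votingPowerNeeded(100_000, 2_000_000, 200_000_000);
console.log(needed); // 10000000
```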
### Voting on proposal
[Section titled “Voting on proposal”](#voting-on-proposal)
At the start of the voting period, xGovs ([ARC-33](/arc-standards/arc-0033)) will vote on proposals using the voting tool hosted at [xgov.algorand.foundation](https://xgov.algorand.foundation/).
Each vote will reference the PR number and a CID hash of the proposal itself.
The CID **MUST**:
* Represent the file.
* Be a version 1 (CIDv1) CID.
  * E.g., use the option `--cid-version=1` of `ipfs add`.
* Use the SHA-256 hash algorithm.
  * E.g., use the option `--hash=sha2-256` of `ipfs add`.
### Grants calculation
[Section titled “Grants calculation”](#grants-calculation)
The allocation of grants will consider the funding request amounts and the available amount of ALGO to be distributed.
### Grants contract & payment
[Section titled “Grants contract & payment”](#grants-contract--payment)
* Once grants are approved, the Algorand Foundation team will handle the applicable contract and payment.
* **Before submitting your grant proposal**, review the contract template and ensure you’re comfortable with its terms: [Contract Template](https://drive.google.com/file/d/1dsKwQGhnS3h_PrSkoidhnvqlX7soLpZ-/view).
> For milestone-based grants, please also refer to the [Submit a proposal section](/arc-standards/arc-0034#submit-a-proposal)
## Rationale
[Section titled “Rationale”](#rationale)
The current status of the proposal process includes the following elements:
* Proposals will be submitted off-chain and linked to the on-chain voting through a hash.
* Projects that require multiple funding rounds will need to submit separate proposals.
* The allocation of funds will be subject to review and adjustment during each governance period.
* Voting on proposals will take place on-chain.
We encourage the community to continue to provide input on this topic through the submission of questions and ideas in this ARC document.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Offline Wallet Backup Protocol
> Wallet-agnostic backup protocol for multiple accounts
## Abstract
[Section titled “Abstract”](#abstract)
This document outlines the high-level requirements for a wallet-agnostic backup protocol that can be used across all wallets on the Algorand ecosystem.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### Requirements
[Section titled “Requirements”](#requirements)
At a high level, the offline wallet backup protocol has the following requirements:
* Wallet applications should allow backing up and storing multiple accounts at the same time.
* Account information should be encrypted with a user-defined secret key, utilizing the NaCl SecretBox method (audited and endorsed by Algorand).
* The final encrypted string should be easily copyable so that it can be stored digitally.
* When importing, wallet applications should be able to detect already imported accounts and gracefully ignore them.
### Format
[Section titled “Format”](#format)
Before encryption, account information should be converted to the following JSON format:
```plaintext
{
"device_id": "UNIQUE IDENTIFIER FOR DEVICE (OPTIONAL)",
"provider_name": "PROVIDER NAME (OPTIONAL, i.e. Pera Wallet)",
"accounts": [
{
"address": "ACCOUNT PUBLIC ADDRESS (REQUIRED)",
"name": "USER DEFINED ACCOUNT NAME (OPTIONAL)",
"account_type": "TYPE OF ACCOUNT: single, multisig, watch, contact, ledger (REQUIRED)",
"private_key": "PRIVATE KEY AS BASE64 ENCODING OF 64 BYTE ALGORAND PRIVATE KEY as encoded by algosdk (NOT PASSPHRASE, REQUIRED for user-owned accounts, can be omitted in case of watch, contact, multisig, ledger accounts)",
"metadata": "ANY ADDITIONAL CONTENT (OPTIONAL)",
"multisig": "Multisig information (only required if the account_type is multisig)",
"ledger": {
"device_id": "device id",
"index": ,
"connection_type": "bluetooth|usb"
},
},
...
]
}
```
*Clients must accept additional fields in the JSON document.*
Here is an example with a single account:
```plaintext
{
"device_id": "2498232091970170817",
"provider_name": "Pera Wallet",
"accounts": [
{
"address": "ELWRE6HZ7KIUT46EQ6PBISGD3ND6QSCBVWICYR2QR2Y7LOBRZRCAIKLWDE",
"name": "My NFT Account",
"account_type": "single",
"private_key": "w0HG2VH7tAYz9PD4SYX0flC4CKh1OONCB6U5bP7cXGci7RJ4+fqRSfPEh54USMPbR+hIQa2QLEdQjrH1uDHMRA=="
}
]
}
```
Here is an example with a single multi-sig account:
```plaintext
{
"device_id": "2498232091970170817",
"provider_name": "Pera Wallet",
"accounts": [
{
"address": "ELWRE6HZ7KIUT46EQ6PBISGD3ND6QSCBVWICYR2QR2Y7LOBRZRCAIKLWDE",
"name": "Our Multisig Account",
"account_type": "multisig",
"multisig": {
"version": 1,
"threshold": 2,
"addrs": [
"ADDRESS 1",
"ADDRESS 2",
"ADDRESS 3"
]
}
}
]
}
```
### Encryption
[Section titled “Encryption”](#encryption)
Once the input JSON is ready, as specified above, it needs to be encrypted. Even if it is assumed that the user is going to store this information in a secure location, copy-pasting it without encryption is not secure, since multiple applications can access the clipboard.
The information needs to be encrypted using a very long passphrase: a 12-word mnemonic will be used as the key. A 12-word mnemonic is secure, and it will not be confused with the 25-word mnemonics that represent Algorand account private keys.
Wallet applications should not allow users to copy the 12-word mnemonic, nor allow taking screenshots of it; users should write it down manually.
The encryption should be made as follows:
1. The wallet generates a random 16-byte string S (using a cryptographically secure random number generator)
2. The wallet derives a 32-byte key: `key = HMAC-SHA256(key="Algorand export 1.0", input=S)`. On libsodium, use `crypto_auth_hmacsha256_init` / `crypto_auth_hmacsha256_update` / `crypto_auth_hmacsha256_final`.
3. The wallet encrypts the input JSON using `crypto_secretbox_easy` from libsodium.
4. The wallet outputs the following output JSON:
```plaintext
{
"version": "1.0",
"suite": "HMAC-SHA256:sodium_secretbox_easy",
"ciphertext": "BASE64 ENCODING OF THE ENCRYPTED BYTES"
}
```
This JSON document (referred to below as the ciphertext envelope JSON) needs to be encoded with base64 again in order to make it easier to copy-paste and store.
5. S is encoded as a 12-word mnemonic (according to BIP-39) and displayed to the user.
The user will be responsible for keeping the 12-word mnemonic and the base64 output of the ciphertext envelope JSON in safe locations. Note that step 5 is the default approach; however, wallets can support methods other than mnemonics, as long as they are secure.
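Steps 1 and 2 of the scheme can be sketched with Node's built-in `crypto` module; this is a sketch only, and the secretbox encryption of step 3 (which requires a libsodium binding such as `libsodium-wrappers`) is omitted:

```typescript
import { createHmac, randomBytes } from "node:crypto";

// Step 1: generate the random 16-byte seed S with a CSPRNG.
const S = randomBytes(16);

// Step 2: derive the 32-byte encryption key:
//   key = HMAC-SHA256(key="Algorand export 1.0", input=S)
const key = createHmac("sha256", "Algorand export 1.0").update(S).digest();

// The 32-byte key is suitable as a secretbox key; S itself is what gets
// encoded as the 12-word BIP-39 mnemonic shown to the user (step 5).
console.log(key.length); // 32
```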
### Importing
[Section titled “Importing”](#importing)
When importing, wallet applications should ask the user for the base64 output of the envelope JSON and the 12-word mnemonic. After getting these values, the application should attempt to decrypt the encrypted string using the key derived from the 12-word mnemonic. On successful decryption, the contained accounts can be processed, with already imported accounts gracefully ignored.
## Rationale
[Section titled “Rationale”](#rationale)
There are many benefits to having an openly documented format:
* Better interoperability across wallets, allowing users to use multiple wallets easily by importing all of their accounts using a single format.
* Easy and secure backup of all wallet data at a user-defined location, including secure storage in digital environments.
* Ability to transfer data from device to device securely, such as when moving data from one mobile device to another.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Tbd
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Convention for declaring filters of an NFT
> This is a convention for declaring filters in an NFT metadata
## Abstract
[Section titled “Abstract”](#abstract)
The goal is to establish a standard for how filters are declared inside a non-fungible (NFT) metadata.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative.
If the property `filters` is provided anywhere in the metadata of an NFT, it **MUST** adhere to the schema below. If the NFT is part of a larger collection and that collection has filters, all the available filters for the collection **MUST** be listed as properties of the `filters` object. If the NFT does not have a particular filter, its value **MUST** be “none”.
The JSON schema for `filters` is as follows:
```json
{
"title": "Filters for Non-Fungible Token",
"type": "object",
"properties": {
"filters": {
"type": "object",
"description": "Filters can be used to filter nfts of a collection. Values must be an array of strings or numbers."
}
}
}
```
#### Examples
[Section titled “Examples”](#examples)
##### Example of an NFT that has traits & filters
[Section titled “Example of an NFT that has traits & filters”](#example-of-an-nft-that-has-traits--filters)
```json
{
"name": "NFT With Traits & filters",
"description": "NFT with traits & filters",
"image": "https://s3.amazonaws.com/your-bucket/images/two.png",
"image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
"properties": {
"creator": "Tim Smith",
"created_at": "January 2, 2022",
"traits": {
"background": "yellow",
"head": "curly"
},
"filters": {
"xp": 120,
"state": "REM"
}
}
}
```
## Rationale
[Section titled “Rationale”](#rationale)
A standard for filters is needed so programs know what to expect in order to filter the NFTs of a collection without relying on rarity.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
If `filters` is added on top of the [ARC-16](/arc-standards/arc-0016) fields, both `traits` and `filters` should be inside the `properties` object. (eg: [Example above](/arc-standards/arc-0036#example-of-an-nft-that-has-traits--filters))
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# xGov Pilot - Integration
> Integration of xGov Process
## Abstract
[Section titled “Abstract”](#abstract)
This ARC aims to explain how the xGov process can be integrated within dApps.
## Motivation
[Section titled “Motivation”](#motivation)
By leveraging the decentralization of the xGov process, dApp integrations can improve the overall efficiency of this initiative.
## Specification
[Section titled “Specification”](#specification)
The keywords “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### How to register
[Section titled “How to register”](#how-to-register)
#### How to find the xGov Escrow address
[Section titled “How to find the xGov Escrow address”](#how-to-find-the-xgov-escrow-address)
The xGov Escrow address can be extracted using this endpoint: `https://governance.algorand.foundation/api/periods/active/`.
```json
{
...
"xgov_escrow_address": "string",
...
}
```
#### Registration
[Section titled “Registration”](#registration)
Governors should specify the xGov-related fields. Specifically, governors can sign up to be xGovs by designating the xGov escrow address (which changes from one governance period to the next) as the beneficiary. They can also designate an xGov-controller address that will participate on their behalf in xGov votes via the optional parameter `"xGv":"aaa"`. Namely, the Notes field has the form:
`af/gov1:j{"com":nnn,"mmm1":nnn1,"mmm2":nnn2,"bnf":"XYZ","xGv":"ABC"}`
Where:
* `"com":nnn` is the Algo commitment.
* `"mmm":nnn` is a commitment for the LP-token with asset-ID mmm.
* `"bnf":"XYZ"` designates the address "XYZ" as the recipient of rewards ("XYZ" must equal the xGov escrow in order to sign up as an xGov).
* The optional `"xGv":"ABC"` designates the address "ABC" as the xGov-controller of this xGov account.
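Constructing such a Notes field might look as follows; the addresses and amounts here are placeholders, and the exact key ordering and formatting should be checked against the official governance documentation:

```typescript
// Placeholder values, for illustration only.
const commitment = {
  com: 1000000,                // Algo commitment
  bnf: "XGOV_ESCROW_ADDRESS",  // must equal the xGov escrow to enrol as an xGov
  xGv: "CONTROLLER_ADDRESS",   // optional xGov-controller address
};

// The note is the literal prefix "af/gov1:j" followed by compact JSON.
const note = "af/gov1:j" + JSON.stringify(commitment);
console.log(note);
// af/gov1:j{"com":1000000,"bnf":"XGOV_ESCROW_ADDRESS","xGv":"CONTROLLER_ADDRESS"}
```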
#### Goal example
[Section titled “Goal example”](#goal-example)
```shell
goal clerk send -a 0 -f ALDJ4R2L2PNDGQFSP4LZY4HATIFKZVOKTBKHDGI2PKAFZJSWC4L3UY5HN4 -t RFKCBRTPO76KTY7KSJ3HVWCH5HLBPNBHQYDC52QH3VRS2KIM7N56AS44M4 -n \
'af/gov1:j{"com":1000000,"12345":2,"67890":30,"bnf":"DRWUX3L5EW7NAYCFL3NWGDXX4YC6Y6NR2XVYIC6UNOZUUU2ERQEAJHOH4M","xGv":"ALDJ4R2L2PNDGQFSP4LZY4HATIFKZVOKTBKHDGI2PKAFZJSWC4L3UY5HN4"}'
```
### How to Interact with the Voting Application
[Section titled “How to Interact with the Voting Application”](#how-to-interact-with-the-voting-application)
#### How to get the Application ID
[Section titled “How to get the Application ID”](#how-to-get-the-application-id)
Every voting round will have a different application ID. Search for all applications created by the account used, and inspect the global state to see whether `is_bootstrapped` is 1.
#### ABI
[Section titled “ABI”](#abi)
The ABI is available [here](https://github.com/algorandfoundation/nft_voting_tool/blob/main/src/algorand/smart_contracts/artifacts/VotingRoundApp/contract.json). A working test example of how to call the application’s methods is here:
## Rationale
[Section titled “Rationale”](#rationale)
This integration will improve the usage of the process.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
None
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Logic Signature Templates
> Defining templated logic signatures so wallets can safely sign them.
## Abstract
[Section titled “Abstract”](#abstract)
This standard allows wallets to sign known logic signatures and clearly tell the user what they are signing.
## Motivation
[Section titled “Motivation”](#motivation)
Currently, most Algorand wallets do not enable the signing of logic signature programs for the purpose of delegation. The rationale is to prevent users from signing malicious programs, but this limitation also prevents non-malicious delegated logic signatures from being used in the Algorand ecosystem. As such, there needs to be a way to provide a safe way for wallets to sign logic signatures without putting users at risk.
## Specification
[Section titled “Specification”](#specification)
A logic signature **MUST** be described via the following JSON interface(s):
### Interface
[Section titled “Interface”](#interface)
```typescript
interface LogicSignatureDescription {
name: string,
description: string,
program: string,
variables: {
variable: string,
name: string,
type: string,
description: string
}[]
}
```
| Key | Description |
| ----------------------- | ------------------------------------------------------------------------- |
| `name` | The name of the logic signature. **SHOULD** be short and descriptive |
| `description` | A description of what the logic signature does |
| `program` | base64 encoding of the TEAL program source |
| `variables` | An array of variables in the program |
| `variables.variable` | The name of the variable in the templated program. |
| `variables.name` | Human-friendly name for the variable. **SHOULD** be short and descriptive |
| `variables.type` | **MUST** be a type defined below in the `type` section |
| `variables.description` | A description of how this variable is used in the program |
### Variables
[Section titled “Variables”](#variables)
A variable in the program **MUST** start with `TMPL_`.
#### Types
[Section titled “Types”](#types)
All non-reference ABI types **MUST** be supported by the client. ABI values **MUST** be encoded in base16 (with the leading `0x`) with the following exceptions:
| Type | Description |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `address` | 58-character base32 Algorand public address. Typically to be used as an argument to the `addr` opcode. Front-ends **SHOULD** provide a link to the address on an explorer |
| `application` | Application ID. Alias for `uint64`. Front-ends **SHOULD** provide a link to the app on an explorer |
| `asset` | Asset ID. Alias for `uint64`. Front-ends **SHOULD** provide a link to the asset on an explorer |
| `string` | UTF-8 string. Typically used as an argument to `byte`, `method`, or a branching opcode. |
| `hex` | base16 encoding of binary data. Typically used as an argument to `byte`. **MUST** be prefixed with `0x` |
For all other values, front-ends **MUST** decode the ABI value to display the human-readable value to the user.
### Input Validation
[Section titled “Input Validation”](#input-validation)
All ABI values **MUST** be encoded as base16 and prefixed with `0x`, with the exception of `uint64` which should be provided as an integer.
String values **MUST NOT** include any unescaped `"` to ensure there is no TEAL injection.
All values **MUST** be validated to ensure they are encoded properly. This includes the following checks:
* An `address` value must be a valid Algorand address
* A `uint64`, `application`, or `asset` value must be a valid unsigned 64-bit integer
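A minimal validation sketch of the checks above. Helper names are illustrative, not from the ARC; a real client would also verify the address checksum (e.g. via `algosdk.isValidAddress`) rather than only the base32 shape:

```typescript
// Illustrative validators for templated-lsig values (names are not from the ARC).
const MAX_UINT64 = 2n ** 64n - 1n;

// `uint64`, `application`, and `asset` values must be unsigned 64-bit integers.
function isValidUint64(value: string | number): boolean {
  try {
    const v = BigInt(value);
    return v >= 0n && v <= MAX_UINT64;
  } catch {
    return false; // not an integer at all
  }
}

// An `address` value is 58 base32 characters; full validation would also
// check the embedded checksum (e.g. via algosdk.isValidAddress).
function looksLikeAddress(value: string): boolean {
  return /^[A-Z2-7]{58}$/.test(value);
}

// A string value must not contain an unescaped `"` (TEAL injection guard).
function isSafeString(value: string): boolean {
  return !/(^|[^\\])"/.test(value);
}
```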
### Unique Identification
[Section titled “Unique Identification”](#unique-identification)
To enable unique identification of a description, clients **MUST** calculate the SHA256 hash of the JSON description canonicalized in accordance with [RFC 8785](https://www.rfc-editor.org/rfc/rfc8785).
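A sketch of that identification step. The canonicalizer below only sorts object keys recursively, which covers typical descriptions; a full RFC 8785 implementation also normalizes number serialization, so a production wallet should use a vetted JCS library:

```typescript
import { createHash } from "node:crypto";

// Minimal JCS-style canonicalization sketch: recursively sort object keys.
// (Not a complete RFC 8785 implementation; see the lead-in caveats.)
function canonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  const entries = Object.keys(value as object)
    .sort()
    .map((k) => `${JSON.stringify(k)}:${canonicalize((value as Record<string, unknown>)[k])}`);
  return `{${entries.join(",")}}`;
}

// Unique ID of a LogicSignatureDescription: SHA256 over the canonical JSON.
function descriptionId(description: object): string {
  return createHash("sha256").update(canonicalize(description)).digest("hex");
}
```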
### WalletConnect Method
[Section titled “WalletConnect Method”](#walletconnect-method)
For wallets to support this ARC, they need to support the `algo_templatedLsig` method.
The method expects three parameters, described by the interface below:
```ts
interface TemplatedLsigParams {
/** The canonicalized ARC-47 templated lsig JSON as described in this ARC */
arc47: string
/** The values of the templated variables, if there are any */
values?: {[variable: string]: string | number}
/** The hash of the expected program. Wallets should compile the lsig with the given values to verify the program hash matches */
hash: string
}
```
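A sketch of the substitution step a wallet might perform before compiling and hash-checking the program (the compile-and-compare step itself needs an algod client and is omitted; the function name is illustrative):

```typescript
// Replace each TMPL_ variable with its encoded value; the wallet would then
// compile the result and verify the program hash matches `hash`.
function substituteVariables(
  program: string,
  values: Record<string, string | number> = {},
): string {
  let source = program;
  for (const [variable, value] of Object.entries(values)) {
    source = source.split(variable).join(String(value));
  }
  return source;
}
```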
## Rationale
[Section titled “Rationale”](#rationale)
This provides a way for frontends to clearly display to the user what is being signed when signing a logic signature.
Template variables must be immediate arguments. Otherwise a string variable could specify the opcode in the program, which could have unintended and unclear consequences.
`TMPL_` prefix is used to align with existing template variable tooling.
Hashing canonicalized JSON is useful for ensuring clients, such as wallets, can create an allowlist of templated logic signatures.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
N/A
## Test Cases
[Section titled “Test Cases”](#test-cases)
N/A
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
A reference implementation can be found in the `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0047` folder.
[lsig.teal](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0047/lsig.teal) contains the templated TEAL code for a logic signature that allows payments of a specific amount every 25,000 blocks.
[dapp.ts](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0047/dapp.ts) contains a TypeScript script showcasing how a dapp would form a wallet connect request for a templated logic signature.
[wallet.ts](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0047/wallet.ts) contains a TypeScript script showcasing how a wallet would handle a request for signing a templated logic signature.
[validate.ts](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0047/validate.ts) contains a TypeScript script showcasing how one could validate templated TEAL and variable values.
### String Variables
[Section titled “String Variables”](#string-variables)
#### Invalid: Partial Argument
[Section titled “Invalid: Partial Argument”](#invalid-partial-argument)
```plaintext
#pragma version 9
byte "Hello, TMPL_NAME"
```
This is not valid because `TMPL_NAME` is not the full immediate argument.
#### Invalid: Not An Argument
[Section titled “Invalid: Not An Argument”](#invalid-not-an-argument)
```plaintext
#pragma version 9
TMPL_PUSH_HELLO_NAME
```
This is not valid because `TMPL_PUSH_HELLO_NAME` is not an immediate argument to an opcode.
#### Valid
[Section titled “Valid”](#valid)
```plaintext
#pragma version 9
byte TMPL_HELLO_NAME
```
This is valid as `TMPL_HELLO_NAME` is the entire immediate argument of the `byte` opcode. A possible value could be `Hello, AlgoDev`.
### Hex Variables
[Section titled “Hex Variables”](#hex-variables)
#### Valid
[Section titled “Valid”](#valid-1)
```plaintext
#pragma version 9
byte TMPL_DEAD_BEEF
```
This is valid as `TMPL_DEAD_BEEF` is the full immediate argument to the `byte` opcode. A possible value could be `0xdeadbeef`.
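The rules illustrated by the examples above can be checked mechanically. This is a whitespace-tokenizing heuristic sketch; a robust validator (like the reference `validate.ts`) would parse TEAL properly:

```typescript
// Sketch: verify every TMPL_ variable appears only as a complete immediate
// argument (a standalone token after an opcode), per the examples above.
function templateVariablesValid(teal: string): boolean {
  for (const rawLine of teal.split("\n")) {
    const line = rawLine.split("//")[0].trim(); // drop comments
    if (!line || !line.includes("TMPL_")) continue;
    const tokens = line.split(/\s+/);
    // The first token is the opcode; a variable must not be the opcode itself.
    if (tokens[0].startsWith("TMPL_")) return false;
    // Any token merely containing TMPL_ (e.g. "Hello, TMPL_NAME") is partial.
    for (const tok of tokens.slice(1)) {
      if (tok.includes("TMPL_") && !/^TMPL_[A-Z0-9_]+$/.test(tok)) return false;
    }
  }
  return true;
}
```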
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
It should be made clear that this standard alone does not define how frontends, particularly wallets, should deem a logic signature to be safe. This is a decision made solely by the front-ends as to which logic signatures they allow to be signed. It is **RECOMMENDED** to only support the signing of audited or otherwise trusted logic signatures.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Targeted DeFi Rewards
> Targeted DeFi Rewards, Terms and Conditions
## Abstract
[Section titled “Abstract”](#abstract)
Targeted DeFi Rewards is a temporary incentive program that distributes Algo to fund targeted activities that attract new DeFi users from within and outside the ecosystem. The goal is to give DeFi projects more flexibility in how these rewards are structured and distributed among their user base, targeting rapid growth, deeper DEX liquidity, and incentives for users who come to Algorand in the middle of a governance period.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### Eligibility Criteria
[Section titled “Eligibility Criteria”](#eligibility-criteria)
To be eligible to apply to this program, projects must abide by the [Disclaimers](https://www.algorand.foundation/disclaimers) (in particular the “Excluded Jurisdictions” section) and be willing to enter into a binding contract in the form of the template provided by the Algorand Foundation.
> The Algorand Foundation is temporarily allowing US-based entities to apply for this program. Approved projects will have their rewards swapped to USDCa on the day of the payment. This exception will be reviewed periodically.
Projects must have at least 500K Algo equivalent in TVL of white-listed assets, at the time of the quarterly snapshot block, which happens on the 15th day of the last month of each calendar quarter. All related wallet addresses will be provided in advance for peer scrutiny.
The DeFi Advisory Committee will review applications to verify each TVL claim, thus ensuring that claims are valid prior to application approval.
For AMMs, we will leverage the Eligible Liquidity Pool list currently adopted to allow governors' commitment of LP tokens in the DeFi Rewards program, with extension to the assets defined below.
For Lending/Borrowing protocols, each project will provide a list of their assets and their holding wallet address(es).
For Bridges, each project will provide a list of the bridged assets and their holding wallet address(es).
### Assets Selection
[Section titled “Assets Selection”](#assets-selection)
The metrics used to select eligible assets to be used for Eligibility TVL Calculation (as per Eligibility Criteria above) were chosen to ensure that the selected tokens have a strong reputation, are difficult to manipulate, and are valuable to the ecosystem. This reputation is built on a combination of factors, including Total Value Locked (TVL), Market Cap, and listings.
> Assets are expected to meet at least two of the three criteria below to be included in the white-list.
| Criteria | |
| :--------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| TVL | The total value locked in different Algorand protocols plays a key role. It’s a good indicator of the token’s popularity. Minimum TVL requirement: $100K across all the protocols. |
| Market Cap | Market cap is a measure of a crypto token’s total circulating supply multiplied by its current market price. This parameter can be used to consider the positioning of the tokens on the entire crypto market. Minimum Market Cap requirement: USD 1MM. |
| Listing | Tokens listed on multiple stable and respected exchanges are often seen as more established and trustworthy. This can also contribute to increased demand for the token and further the growth of its reputation within the ecosystem. |
The following assets are qualified and meet the above criteria:
* ALGO
* gALGO - ASA ID 793124631
* USDC - ASA ID 31566704
* USDT - ASA ID 312769
* goBTC - ASA ID 386192725
* goETH - ASA ID 386195940
* PLANETS - ASA ID 27165954
* OPUL - ASA ID 287867876
* VESTIGE - ASA ID 700965019
* CHIPS - ASA ID 388592191
* DEFLY - ASA ID 470842789
* goUSD - ASA ID 672913181
* WBTC - ASA ID 1058926737
* WETH - ASA ID 887406851
* GOLD$ - ASA ID 246516580
* SILVER$ - ASA ID 246519683
* PEPE - ASA ID 1096015467
* COOP - ASA ID 796425061
* GORA - ASA ID 1138500612
> Applications for the above list can be submitted at any time [using this form](https://forms.gle/kpEpZ8sih69M5xa39). Cut off for the applications review is the 7th day of the last month of each calendar quarter, or one week before the quarterly snapshot date.
### Rewards Distribution
[Section titled “Rewards Distribution”](#rewards-distribution)
Projects will receive 11250 Algo for each 500K Algo TVL as defined above, rounded down. In the event that the available Algo are not sufficient for all the projects, Algo rewards will be distributed to each protocol based on their weighted contribution of TVL to Algorand DeFi.
Rewards per project are capped at 25% of the total rewards distributed under this program for that period. In the event of partial distribution of the allocated 7.5MM, the remaining funds will be distributed as regular DeFi governance rewards. For Governance Period 8, AMM TVL counts double compared to lending/borrowing and bridge projects, in recognition of AMMs' strategic role in providing liquidity for the ecosystem. This modification was approved by the DeFi Committee.
Rewards under this program will be distributed to projects within 4 weeks of the scheduled start date of the new governance period. The usage of these rewards will be made public, and they will be entirely dedicated to protocol provision, user rewards, and user engagement. The use of rewards and the methodology for payment must be made public and approved by the Algorand DeFi Advisory Committee prior to distribution.
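As an arithmetic sketch of the rules above (function and parameter names are illustrative, not part of the program's terms):

```typescript
// 11,250 Algo per full 500K Algo of eligible TVL (rounded down), capped at
// 25% of the total rewards distributed under the program for the period.
function targetedDefiRewards(tvlAlgo: number, totalProgramRewards: number): number {
  const base = Math.floor(tvlAlgo / 500_000) * 11_250;
  return Math.min(base, 0.25 * totalProgramRewards);
}
```

For example, a project with 1.2MM Algo of eligible TVL earns two full tranches, i.e. 22,500 Algo, before the cap is considered.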
## Rationale
[Section titled “Rationale”](#rationale)
This document was versioned using Google Docs; it made more sense to move it to GitHub.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Disclaimer: This document may be revised until the day before the voting session opens, as we are still collecting community feedback.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# NFT Rewards
> NFT Rewards, Terms and Conditions
## Abstract
[Section titled “Abstract”](#abstract)
NFT Rewards is a temporary incentive program that distributes ALGO to fund targeted activities that attract new NFT users from within and outside the ecosystem.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### Pilot program qualification for NFT marketplaces
[Section titled “Pilot program qualification for NFT marketplaces”](#pilot-program-qualification-for-nft-marketplaces)
To be eligible to apply to this program, projects must abide by the [Disclaimers](https://www.algorand.foundation/disclaimers) (in particular the “Excluded Jurisdictions” section) and be willing to enter into a binding contract in the form of the template provided by the Algorand Foundation.
NFT marketplaces applying for this program:
* Must be an NFT marketplace on Algorand that coordinates the selling of NFTs. An NFT marketplace is defined as an online platform that facilitates third-party non-fungible token listings and transactions in ALGO on the Algorand blockchain.
* Must have transaction volume (over the previous 6 months leading up to the application for the program) that is equivalent to at least 10% of the total rewards being distributed. For example, if the total rewards amount is 500K ALGO, then the minimum volume must be 50K ALGO.
#### Important Note
[Section titled “Important Note”](#important-note)
*NFT Rewards Program for US entities:*
> For 2024 | Q2 we will be allowing US-based entities that fit the Program Criteria to apply for the NFT Rewards program. Their allocated ALGO will be converted to USDCa prior to the payment transfer. This change will be reviewed on a periodic basis.
### Allocation of rewards
[Section titled “Allocation of rewards”](#allocation-of-rewards)
* Rewards will be allocated proportionally based on volume for each qualified NFT marketplace.
* For qualifying marketplaces with more than 50% of total NFT marketplace volume, rewards will be capped at 35%.
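As a sketch of the allocation rule above (names are illustrative; how a capped marketplace's remainder is redistributed is not specified here):

```typescript
// Proportional-to-volume allocation; a marketplace with more than 50% of
// total NFT marketplace volume has its share capped at 35% of the pool.
function nftRewardShare(volume: number, totalVolume: number, pool: number): number {
  const share = volume / totalVolume;
  return (share > 0.5 ? 0.35 : share) * pool;
}
```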
### Requirements for initiatives
[Section titled “Requirements for initiatives”](#requirements-for-initiatives)
1. The rewards (ALGO) must ultimately go to NFT collectors/end users and creators.
2. NFT marketplaces must share their campaign plans publicly in advance in order to qualify for the rewards.
3. The rewards (ALGO) should be held in a separate wallet from operating funds to track on-chain transactions of how funds are being spent.
4. The NFT marketplace must make public data that shows its trading volume in the last quarter.
5. Proposals that incentivize wash trading\* will not be approved to participate in the Program.
6. NFT marketplaces must reward creators whose NFTs are purchased with a 5% minimum royalty.
> \* By definition, the term “wash trading” means a form of market manipulation where the same user simultaneously buys and sells the same asset with the intention of giving false or misleading signals about its demand or price.
### Process for launching initiative
[Section titled “Process for launching initiative”](#process-for-launching-initiative)
* To apply, a qualifying NFT marketplace must provide detailed information on the specifics of initiatives they are planning in that period, as well as any documentation proving the location of its headquarters.
* If approved by the Algorand Foundation team, rewards will be distributed proportionally based on the allocation defined above.
* The qualifying NFT marketplaces must provide a detailed 1-page report following the initiative to Algorand Foundation and on the Forum:
1. Summary of the initiatives implemented;
2. Amount of rewards paid out (including any unspent rewards, which must be returned), and wallet addresses;
3. Total volume of transactions directly as a result of the campaign;
4. New wallets interacting with the marketplace;
5. Total volume of transactions compared to the previous quarter;
6. Any other relevant information.
### Evaluation
[Section titled “Evaluation”](#evaluation)
From GP10 (Q1/2024), proposals will be added to the governance portal and approved or rejected directly by the community. A proposal passes when it reaches a majority of “Yes” votes. The proposals and results are available at [governance.algorand.foundation](https://governance.algorand.foundation).
NFT marketplaces that do not fulfill their campaign plan cannot apply for further incentives.
The NFT team will review overall results and discuss whether this program is having the desired impact and, together with the community, will help evaluate whether it should be extended and expanded to the next period.
### Important to note
[Section titled “Important to note”](#important-to-note)
* Marketplaces that fit the above criteria will be required to sign a legal contract with the Algorand Foundation.
* Rewards are only paid out in ALGO, or USDCa for US-based entities.
* Legal entities based in other jurisdictions where receiving ALGO is not allowed are not able to partake in this program.
* Participants and the Algorand Foundation will all agree on the source of data and metrics to be used for calculating the allocation and measuring the results.
## Rationale
[Section titled “Rationale”](#rationale)
This document was versioned using Google Docs; it made more sense to move it to GitHub.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Disclaimer: This document may be revised until the day before the voting session opens, as we are still collecting community feedback.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Metadata Declarations
> A specification for decentralized, self-declared, and verifiable tokens, collections, and metadata
## Abstract
[Section titled “Abstract”](#abstract)
This ARC describes a standard for a self-sovereign on-chain project & info declaration. The declaration is an IPFS link to a JSON document attached to a smart contract with multi-wallet verification capabilities that contains information about a project, including project tokens, FAQ, NFT collections, team members, and more.
## Motivation
[Section titled “Motivation”](#motivation)
In our current ecosystem we have a number of centralized implementations for communicating parts of this vital information to other relevant parties. All NFT marketplaces implement their own collection listing systems & requirements. Block explorers all take different approaches to sourcing images for ASAs, the most common being a GitHub repository that the Tinyman team controls & maintains. This ARC aims to standardize the way that projects communicate this information to other parts of our ecosystem.
We can use a smart contract with multi-wallet verification to store this information in a decentralized, self-sovereign & verifiable way by using custom field metadata & IPFS. A chain parser can be used to read the information stored & verify the details against the verified wallets attached to the contract.
## Specification
[Section titled “Specification”](#specification)
The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
This proposal specifies an associated off-chain JSON metadata file, displayed below. This metadata file contains many separate sections and escape hatches to include unique metadata about various businesses & projects. To require as few files & IPFS uploads as possible, the sections are all included within the same file. The file is then added to IPFS and the link saved in a custom field on the smart contract under the key `project`.
| Field | Schema | Description | Required |
| ----------- | ------------------ | ---------------------------------------------------------------------------------- | -------- |
| version | string | The version of the standard that the metadata is following. | true |
| associates | array\<object\> | An array of objects that represent the associates of the project. | false |
| collections | array\<object\> | An array of objects that represent the collections of the project. | false |
| tokens | array\<object\> | An array of objects that represent the tokens of the project. | false |
| faq | array\<object\> | An array of objects that represent the FAQ of the project. | false |
| extras | object | An object that represents any extra information that the project wants to include. | false |
##### Top Level JSON Example
[Section titled “Top Level JSON Example”](#top-level-json-example)
```json
{
"version": "0.0.2",
"associates": [...],
"collections": [...],
"tokens": [...],
"faq": [...],
"extras": {...}
}
```
### Version
[Section titled “Version”](#version)
We envision this is an evolving / living standard that allows the community to add new sections & metadata as needed. The version field will be used to determine which version of the standard the metadata is following. This will allow for backwards compatibility & future proofing as the standard changes & grows. At the top level, `version` is the only required field.
### Associates
[Section titled “Associates”](#associates)
Associates are a list of wallets & roles that are associated with the project. This can be used to display the team members of a project, or the owners of a collection.
The associates field is an array of objects that contain the following fields:
| Field | Schema | Description | Required |
| ------- | ------ | ------------------------------------------------------------------ | -------- |
| address | string | The Algorand wallet address of the associated person | true |
| role | string | A short title for the role the associate plays within the project. | true |
eg:
```json
"associates": [
{
"address": "W5MD3VTDUN3H2FFYJR2NDXGAAV2SJ44XEEDGBWHIZKH6ZZXF44SE7KEPVP",
"role": "Project Founder"
},
...
]
```
### Collections
[Section titled “Collections”](#collections)
NFT Collections have no formal standard for how they should be declared. This section aims to standardize the way that collections are declared & categorized. The collections field is an array of objects that contain the following fields:
| Field | Schema | Description | Required |
| ------------------- | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| name | string | The name of the collection | true |
| network | string | The blockchain network that the collection is minted on. *Default*: `algorand` *Special*: `multichain` | false |
| prefixes | array\<string\> | An array of strings that represent the prefixes to match against the `unit_name` of the NFTs in the collection. | false |
| addresses | array\<string\> | An array of strings that represent the addresses that minted the NFTs in the collection. | false |
| assets | array\<string\> | An array of strings that represent the asset\_ids of the NFTs in the collection. | false |
| excluded\_assets | array\<string\> | An array of strings that represent the asset\_ids of the NFTs in the collection that should be excluded. | false |
| artists | array\<string\> | An array of strings that represent the addresses of the artists that created the NFTs in the collection. | false |
| banner\_image | string | An IPFS link to an image that represents the collection. *if set `banner_id` should be unset & vice-versa* | false |
| banner\_id | uint64 | An asset\_id that represents the collection. | false |
| avatar\_image | string | An IPFS link to an image that represents the collection. *if set `avatar_id` should be unset & vice-versa* | false |
| avatar\_id | uint64 | An asset\_id that represents the collection. | false |
| explicit | boolean | A boolean that represents whether or not the collection contains explicit content. | false |
| royalty\_percentage | uint64 | A uint64 with a value ranging from 0-10000 that represents the royalty percentage that the collection would prefer to take on secondary sales. | false |
| properties | array\<object\> | An array of objects that represent traits from an entire collection. | false |
| extras | object | An object of key value pairs for any extra information that the project wants to include for the collection. | false |
eg:
```json
"collections": [
{
"name": "My Collection",
"network": "algorand",
"prefixes": [
"AKC",
...
],
"addresses": [
"W5MD3VTDUN3H2FFYJR2NDXGAAV2SJ44XEEDGBWHIZKH6ZZXF44SE7KEPVP",
...
],
"assets": [
123456789,
...
],
"excluded_assets": [
123456789,
...
],
"artists": [
"W5MD3VTDUN3H2FFYJR2NDXGAAV2SJ44XEEDGBWHIZKH6ZZXF44SE7KEPVP",
...
],
"banner_image": "ipfs://...",
"avatar_id": 123456789,
"explicit": false,
"royalty_percentage": 750, // ie: 7.5%
"properties": [
{
"name": "Fur",
"values": [
{
"name": "Red",
"image": "ipfs://...",
"image_integrity": "sha256-...",
"image_mimetype": "image/png",
"animation_url": "ipfs://...",
"animation_url_integrity": "sha256-...",
"animation_url_mimetype": "image/gif",
"extras": {
"key": "value",
...
}
},
...
]
}
...
],
"extras": {
"key": "value",
...
}
},
...
]
```
#### Collection Scoping
[Section titled “Collection Scoping”](#collection-scoping)
Not all collections have been consistent with their naming conventions, and some collections are minted across multiple wallets due to prior ASA minting limitations. Used together, the fields `prefixes`, `addresses`, `assets`, and `excluded_assets` offer great flexibility in defining the group of NFTs to include in a collection, allowing for mints that have mistakes, exist across wallets, or don't all conform to a consistent standard.
`prefixes` allows for simple grouping of a set of NFTs based on the beginning part of the ASAs `unit_name`. This is useful for collections that have a consistent naming convention for their NFTs. Every other scoping field modifies this rule.
`addresses` scope down the collection to only include ASAs minted by the addresses listed in this field. This is useful for projects that mint different collections across multiple wallets that utilize the same prefix.
`assets` is a direct entry in the collection for NFTs that don't conform to any of the prefix rules.
`excluded_assets` is a direct exclusion on an NFT that may conform to a prefix but should be excluded from the collection.
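Putting the four scoping fields together, a membership check might look like the sketch below. The asset-record field names are illustrative (an indexer client would supply them), and asset IDs are shown as numbers to match the JSON example above:

```typescript
// Illustrative asset record; field names are assumptions, not from the ARC.
interface AssetInfo { id: number; unitName: string; creator: string; }

interface CollectionScope {
  prefixes?: string[];
  addresses?: string[];
  assets?: number[];
  excludedAssets?: number[];
}

function inCollection(asset: AssetInfo, scope: CollectionScope): boolean {
  if (scope.excludedAssets?.includes(asset.id)) return false; // hard exclusion
  if (scope.assets?.includes(asset.id)) return true;          // direct entry
  const prefixOk =
    scope.prefixes?.some((p) => asset.unitName.startsWith(p)) ?? false;
  const creatorOk =
    !scope.addresses || scope.addresses.includes(asset.creator);
  return prefixOk && creatorOk;
}
```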
`banner_image`, `banner_id`, `avatar_image`, and `avatar_id` are self-explanatory: they allow a glanceable preview of the collection to be displayed on NFT marketplaces, analytics sites & others. Each `banner` and `avatar` field group should use one field or the other, not both: `banner_image` or `banner_id` (likely an ASA ID from the creator), and `avatar_image` or `avatar_id` (likely an ASA ID from the collection).
`explicit` is a boolean that indicates whether or not the collection contains explicit content. This is useful for sites that want to filter out explicit content.
`properties` is an array of objects that represent traits from an entire collection. Many new NFT collections are choosing to use [ARC-19](/arc-standards/arc-0019) and mint their NFTs as blank slates. This can prevent sniping but also has the adverse effect of obscuring the trait information of a collection. This field allows a collection to declare its traits and values, image previews of the traits it references, and extra metadata.
#### Collection Properties
[Section titled “Collection Properties”](#collection-properties)
| Field | Schema | Description | Required |
| ------ | ------------------------------- | -------------------------------------------------------------- | -------- |
| name | string | The name of the property | true |
| values | array\<object\> | An array of objects that represent the values of the property. | true |
#### Collection Property Values
[Section titled “Collection Property Values”](#collection-property-values)
| Field | Schema | Description | Required |
| ------------------------- | ------ | ---------------------------------------------------------------------------------------------------------------- | -------- |
| name | string | The name of the value | true |
| image | string | An IPFS link to an image that represents the value. | false |
| image\_integrity | string | A sha256 hash of the image that represents the value. | false |
| image\_mimetype | string | The mimetype of the image that represents the value. | false |
| animation\_url | string | An IPFS link to an animation that represents the value. | false |
| animation\_url\_integrity | string | A sha256 hash of the animation that represents the value. | false |
| animation\_url\_mimetype | string | The mimetype of the animation that represents the value. | false |
| extras | object | An object of key value pairs for any extra information that the project wants to include for the property value. | false |
### Tokens
[Section titled “Tokens”](#tokens)
Tokens are a list of assets that are associated with the project. This can be used to verify the tokens of a project and for others to easily source images to represent the token on their own platforms.
| Field | Schema | Description | Required |
| ---------------- | ------ | ----------------------------------------------------- | -------- |
| asset\_id | uint64 | The asset\_id of the token | true |
| image | string | An IPFS link to an image that represents the token. | false |
| image\_integrity | string | A sha256 hash of the image that represents the token. | false |
| image\_mimetype | string | The mimetype of the image that represents the token. | false |
e.g.:
```json
"tokens": [
{
"asset_id": 123456789,
"image": "ipfs://...",
"image_integrity": "sha256-...",
"image_mimetype": "image/png"
}
...
]
```
### FAQ
[Section titled “FAQ”](#faq)
Frequently asked questions, allowing the project to address common questions from the community and help keep it informed.
| Field | Schema | Description | Required |
| ----- | ------ | ------------ | -------- |
| q | string | The question | true |
| a | string | The answer | true |
e.g.:
```json
"faq": [
{
"q": "What is XYZ Collection?",
"a": "XYZ Collection is a premiere NFT project that..."
},
...
]
```
### Extras
[Section titled “Extras”](#extras)
Custom metadata for extending and customizing the declaration for your own use cases. This object can be found at several levels throughout the specification: the top level, within collections, and within collection property value objects.
| Field | Schema | Description | Required |
| ----- | ------ | ---------------------------------- | -------- |
| key | string | The key of the extra information | true |
| value | string | The value of the extra information | true |
e.g.:
```json
"extras": {
"key": "value",
...
}
```
### Contract Providers
[Section titled “Contract Providers”](#contract-providers)
Custom metadata needs to be verifiable, and many projects use multiple wallets as a means of separating concerns. Providers are smart contracts that can verify multiple wallets and thus provide evidence to parsers of the authenticity of such data. Providers that support this standard will be listed on the [ARC compatibility matrices](https://arc.algorand.foundation/) site.
## Rationale
[Section titled “Rationale”](#rationale)
See the motivation section above for the general rationale.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ASA Burning App
> Standardized Application for Burning ASAs
## Abstract
[Section titled “Abstract”](#abstract)
This ARC provides TEAL that deploys an application which can be used for burning Algorand Standard Assets. The goal is to have apps deployed on the public networks using this TEAL to provide a standardized burn address and app ID.
## Motivation
[Section titled “Motivation”](#motivation)
Currently there is no official way to burn ASAs. While one can deploy their own app or rekey an account holding the asset to some other address, having a standardized address for burned assets enables explorers and dapps to easily calculate and display the burnt supply for any ASA burned there.
### Definitions Related to Token Supply & Burning
[Section titled “Definitions Related to Token Supply & Burning”](#definitions-related-to-token-supply--burning)
It is important to note that assets with clawback enabled are effectively impossible to “burn” and could at any point be clawed back from any account or contract. The definitions below attempt to clarify some terminology around tokens and what can be considered burned.
| Token Type | Clawback | No Clawback |
| ------------------ | ---------------------------------------------------- | ---------------------------------------------------- |
| Total Supply | Total | Total |
| Circulating Supply | Total - Qty in Reserve Address - Qty in burn address | Total - Qty in Reserve Address - Qty in burn address |
| Available Supply | Total | Total - Qty in burn address |
| Burned Supply | N/A (Impossible to burn) | Qty in burn address |
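The definitions in the table can be sketched as code. This is an illustrative helper, not part of any SDK, assuming quantities are expressed in base units:

```typescript
// Illustrative helper encoding the supply definitions in the table above.
// The names here are hypothetical, not part of any SDK.
interface SupplyInputs {
  total: number;         // total created supply of the ASA
  inReserve: number;     // quantity held by the reserve address
  inBurnAddress: number; // quantity held by the burn app address
  clawbackEnabled: boolean;
}

function describeSupply(s: SupplyInputs) {
  return {
    totalSupply: s.total,
    circulatingSupply: s.total - s.inReserve - s.inBurnAddress,
    // With clawback enabled, "burned" tokens could still be clawed back, so
    // the full total remains available and nothing is permanently burned.
    availableSupply: s.clawbackEnabled ? s.total : s.total - s.inBurnAddress,
    burnedSupply: s.clawbackEnabled ? 0 : s.inBurnAddress,
  };
}
```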
## Specification
[Section titled “Specification”](#specification)
### `ARC-4` JSON Description
[Section titled “ARC-4 JSON Description”](#arc-4-json-description)
```json
{
"name": "ARC54",
"desc": "Standardized application for burning ASAs",
"methods": [
{
"name": "arc54_optIntoASA",
"args": [
{
"name": "asa",
"type": "asset",
"desc": "The asset to which the contract will opt in"
}
],
"desc": "A method to opt the contract into an ASA",
"returns": {
"type": "void",
"desc": ""
}
},
{
"name": "createApplication",
"desc": "",
"returns": {
"type": "void",
"desc": ""
},
"args": []
}
]
}
```
## Rationale
[Section titled “Rationale”](#rationale)
This simple application is only able to opt in to ASAs, not send them. As such, once an ASA has been sent to the app address, it is effectively burnt.
If the burned ASA does not have clawback enabled, it will remain permanently in this account and can be considered out of circulation.
The app will accept ASAs which have clawback enabled, but any such assets can never be considered permanently burned. Users may use the burning app as a convenient receptacle to remove ASAs from their account rather than returning them to the creator account.
The app will, of course, only be able to opt into a new ASA if it has a sufficient Algo balance to cover the increased minimum balance requirement (MBR). Callers should fund the contract account as needed to cover their opt-in requests. It is possible for the contract to be funded by donated Algo so that subsequent callers need not pay the MBR to request new ASA opt-ins.
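As a sketch of that funding math: holding each additional ASA raises the app account's MBR by 100,000 microAlgos (Algorand's standard per-asset holding requirement). The helper name below is hypothetical:

```typescript
// Per-asset MBR increase for holding an ASA, in microAlgos.
const PER_ASA_MBR = 100_000;

// Returns how many microAlgos must be sent to the burn app before it can opt
// into `newOptIns` additional ASAs, given its spendable balance above the
// current MBR. Hypothetical helper, not part of any SDK.
function fundingNeeded(spendableBalance: number, newOptIns: number): number {
  const required = PER_ASA_MBR * newOptIns;
  return Math.max(0, required - spendableBalance);
}
```

If the app already holds donated Algo, `fundingNeeded` returns 0 and the caller can request the opt-in without an accompanying payment.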
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
### TEAL Approval Program
[Section titled “TEAL Approval Program”](#teal-approval-program)
```plaintext
#pragma version 9
// This TEAL was generated by TEALScript v0.62.2
// https://github.com/algorandfoundation/TEALScript
// This contract is compliant with and/or implements the following ARCs: [ ARC4 ]
// The following ten lines of TEAL handle initial program flow
// This pattern is used to make it easy for anyone to parse the start of the program and determine if a specific action is allowed
// Here, action refers to the OnComplete in combination with whether the app is being created or called
// Every possible action for this contract is represented in the switch statement
// If the action is not implemented in the contract, its respective branch will be "NOT_IMPLEMENTED" which just contains "err"
txn ApplicationID
int 0
>
int 6
*
txn OnCompletion
+
switch create_NoOp NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED call_NoOp
NOT_IMPLEMENTED:
err
// arc54_optIntoASA(asset)void
//
// /*
// Sends an inner transaction to opt the contract account into an ASA.
// The fee for the inner transaction must be covered by the caller.
//
// @param asa The ASA to opt in to
abi_route_arc54_optIntoASA:
// asa: asset
txna ApplicationArgs 1
btoi
txnas Assets
// execute arc54_optIntoASA(asset)void
callsub arc54_optIntoASA
int 1
return
arc54_optIntoASA:
proto 1 0
// contracts/arc54.algo.ts:13
// sendAssetTransfer({
// assetReceiver: globals.currentApplicationAddress,
// xferAsset: asa,
// assetAmount: 0,
// fee: 0,
// })
itxn_begin
int axfer
itxn_field TypeEnum
// contracts/arc54.algo.ts:14
// assetReceiver: globals.currentApplicationAddress
global CurrentApplicationAddress
itxn_field AssetReceiver
// contracts/arc54.algo.ts:15
// xferAsset: asa
frame_dig -1 // asa: asset
itxn_field XferAsset
// contracts/arc54.algo.ts:16
// assetAmount: 0
int 0
itxn_field AssetAmount
// contracts/arc54.algo.ts:17
// fee: 0
int 0
itxn_field Fee
// Submit inner transaction
itxn_submit
retsub
abi_route_createApplication:
int 1
return
create_NoOp:
method "createApplication()void"
txna ApplicationArgs 0
match abi_route_createApplication
err
call_NoOp:
method "arc54_optIntoASA(asset)void"
txna ApplicationArgs 0
match abi_route_arc54_optIntoASA
err
```
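The first ten lines of the program compute a single switch index, `(ApplicationID > 0 ? 1 : 0) * 6 + OnCompletion`, and use it to select a branch. A sketch of that routing logic, with the branch labels copied from the switch statement above:

```typescript
// Mirrors the TEAL router: index = (appId > 0 ? 1 : 0) * 6 + onCompletion.
// OnCompletion 0 is NoOp; the seven switch targets are taken from the TEAL.
const targets = [
  "create_NoOp",     // creation (ApplicationID == 0), OnCompletion NoOp
  "NOT_IMPLEMENTED", // creation, OptIn
  "NOT_IMPLEMENTED", // creation, CloseOut
  "NOT_IMPLEMENTED", // creation, ClearState
  "NOT_IMPLEMENTED", // creation, UpdateApplication
  "NOT_IMPLEMENTED", // creation, DeleteApplication
  "call_NoOp",       // call (ApplicationID > 0), OnCompletion NoOp
];

function route(applicationId: number, onCompletion: number): string {
  const index = (applicationId > 0 ? 1 : 0) * 6 + onCompletion;
  // An index past the switch table falls through to the err at
  // NOT_IMPLEMENTED, so out-of-range actions are rejected too.
  return targets[index] ?? "NOT_IMPLEMENTED";
}
```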
### TealScript Source Code
[Section titled “TealScript Source Code”](#tealscript-source-code)
```plaintext
import { Contract } from '@algorandfoundation/tealscript';
// eslint-disable-next-line no-unused-vars
class ARC54 extends Contract {
/*
* Sends an inner transaction to opt the contract account into an ASA.
* The fee for the inner transaction must be covered by the caller.
*
* @param asa The ASA to opt in to
*/
arc54_optIntoASA(asa: Asset): void {
sendAssetTransfer({
assetReceiver: globals.currentApplicationAddress,
xferAsset: asa,
assetAmount: 0,
fee: 0,
});
}
}
```
### Deployments
[Section titled “Deployments”](#deployments)
An application per the above reference implementation has been deployed to each of Algorand’s networks at these app IDs:
| Network | App ID | Address |
| ------- | ---------- | ---------------------------------------------------------- |
| MainNet | 1257620981 | BNFIREKGRXEHCFOEQLTX3PU5SUCMRKDU7WHNBGZA4SXPW42OAHZBP7BPHY |
| TestNet | 497806551 | 3TKF2GMZJ5VZ4BQVQGC72BJ63WFN4QBPU2EUD4NQYHFLC3NE5D7GXHXYOQ |
| BetaNet | 2019020358 | XRXCALSRDVUY2OQXWDYCRMHPCF346WKIV5JPAHXQ4MZADSROJGDIHZP7AI |
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
It should be noted that once an asset is sent to the contract there will be no way to recover the asset unless it has clawback enabled.
Due to the simplicity of the TEAL, an audit is not needed. The contract has no code path that can send tokens, so there is no concern of an exploit that undoes the burning of ASAs without clawback.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# On-Chain storage/transfer for Multisig
> A smart contract that stores transactions and signatures for simplified multisignature use on Algorand.
## Abstract
[Section titled “Abstract”](#abstract)
This ARC proposes the utilization of on-chain smart contracts to facilitate the storage and transfer of Algorand multisignature metadata, transactions, and corresponding signatures for the respective multisignature sub-accounts.
## Motivation
[Section titled “Motivation”](#motivation)
Multisignature (multisig) accounts play a crucial role in enhancing security and control within the Algorand ecosystem. However, the management of multisig accounts often involves intricate off-chain coordination and the distribution of transactions among authorized signers. There exists a pressing need for a more streamlined and simplified approach to multisig utilization, along with an efficient transaction signing workflow.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### ABI
[Section titled “ABI”](#abi)
A compliant smart contract, conforming to this ARC, **MUST** implement the following interface:
```json
{
"name": "ARC-55",
"desc": "On-Chain Msig App",
"methods": [
{
"name": "arc55_getThreshold",
"desc": "Retrieve the signature threshold required for the multisignature to be submitted",
"readonly": true,
"args": [],
"returns": {
"type": "uint64",
"desc": "Multisignature threshold"
}
},
{
"name": "arc55_getAdmin",
"desc": "Retrieves the admin address, responsible for calling arc55_setup",
"readonly": true,
"args": [],
"returns": {
"type": "address",
"desc": "Admin address"
}
},
{
"name": "arc55_nextTransactionGroup",
"readonly": true,
"args": [],
"returns": {
"type": "uint64",
"desc": "Next expected Transaction Group nonce"
}
},
{
"name": "arc55_getTransaction",
"desc": "Retrieve a transaction from a given transaction group",
"readonly": true,
"args": [
{
"name": "transactionGroup",
"type": "uint64",
"desc": "Transaction Group nonce"
},
{
"name": "transactionIndex",
"type": "uint8",
"desc": "Index of transaction within group"
}
],
"returns": {
"type": "byte[]",
"desc": "A single transaction at the specified index for the transaction group nonce"
}
},
{
"name": "arc55_getSignatures",
"desc": "Retrieve a list of signatures for a given transaction group nonce and address",
"readonly": true,
"args": [
{
"name": "transactionGroup",
"type": "uint64",
"desc": "Transaction Group nonce"
},
{
"name": "signer",
"type": "address",
"desc": "Address you want to retrieve signatures for"
}
],
"returns": {
"type": "byte[64][]",
"desc": "Array of signatures"
}
},
{
"name": "arc55_getSignerByIndex",
"desc": "Find out which address is at this index of the multisignature",
"readonly": true,
"args": [
{
"name": "index",
"type": "uint64",
"desc": "Address at this index of the multisignature"
}
],
"returns": {
"type": "address",
"desc": "Address at index"
}
},
{
"name": "arc55_isSigner",
"desc": "Check if an address is a member of the multisignature",
"readonly": true,
"args": [
{
"name": "address",
"type": "address",
"desc": "Address to check is a signer"
}
],
"returns": {
"type": "bool",
"desc": "True if address is a signer"
}
},
{
"name": "arc55_mbrSigIncrease",
"desc": "Calculate the minimum balance requirement for storing a signature",
"readonly": true,
"args": [
{
"name": "signaturesSize",
"type": "uint64",
"desc": "Size (in bytes) of the signatures to store"
}
],
"returns": {
"type": "uint64",
"desc": "Minimum balance requirement increase"
}
},
{
"name": "arc55_mbrTxnIncrease",
"desc": "Calculate the minimum balance requirement for storing a transaction",
"readonly": true,
"args": [
{
"name": "transactionSize",
"type": "uint64",
"desc": "Size (in bytes) of the transaction to store"
}
],
"returns": {
"type": "uint64",
"desc": "Minimum balance requirement increase"
}
},
{
"name": "arc55_setup",
"desc": "Setup On-Chain Msig App. This can only be called whilst no transaction groups have been created.",
"args": [
{
"name": "threshold",
"type": "uint8",
"desc": "Initial multisig threshold, must be greater than 0"
},
{
"name": "addresses",
"type": "address[]",
"desc": "Array of addresses that make up the multisig"
}
],
"returns": {
"type": "void"
}
},
{
"name": "arc55_newTransactionGroup",
"desc": "Generate a new transaction group nonce for holding pending transactions",
"args": [],
"returns": {
"type": "uint64",
"desc": "transactionGroup Transaction Group nonce"
}
},
{
"name": "arc55_addTransaction",
"desc": "Add a transaction to an existing group. Only one transaction should be included per call",
"args": [
{
"name": "costs",
"type": "pay",
"desc": "Minimum Balance Requirement for associated box storage costs: (2500) + (400 * (9 + transaction.length))"
},
{
"name": "transactionGroup",
"type": "uint64",
"desc": "Transaction Group nonce"
},
{
"name": "index",
"type": "uint8",
"desc": "Transaction position within atomic group to add"
},
{
"name": "transaction",
"type": "byte[]",
"desc": "Transaction to add"
}
],
"returns": {
"type": "void"
},
"events": [
{
"name": "TransactionAdded",
"args": [
{
"name": "transactionGroup",
"type": "uint64"
},
{
"name": "transactionIndex",
"type": "uint8"
}
],
"desc": "Emitted when a new transaction is added to a transaction group"
}
]
},
{
"name": "arc55_addTransactionContinued",
"args": [
{
"name": "transaction",
"type": "byte[]"
}
],
"returns": {
"type": "void"
}
},
{
"name": "arc55_removeTransaction",
"desc": "Remove transaction from the app. The MBR associated with the transaction will be returned to the transaction sender.",
"args": [
{
"name": "transactionGroup",
"type": "uint64",
"desc": "Transaction Group nonce"
},
{
"name": "index",
"type": "uint8",
"desc": "Transaction position within atomic group to remove"
}
],
"returns": {
"type": "void"
},
"events": [
{
"name": "TransactionRemoved",
"args": [
{
"name": "transactionGroup",
"type": "uint64"
},
{
"name": "transactionIndex",
"type": "uint8"
}
],
"desc": "Emitted when a transaction has been removed from a transaction group"
}
]
},
{
"name": "arc55_setSignatures",
"desc": "Set signatures for a particular transaction group. Signatures must be included as an array of byte-arrays",
"args": [
{
"name": "costs",
"type": "pay",
"desc": "Minimum Balance Requirement for associated box storage costs: (2500) + (400 * (40 + signatures.length))"
},
{
"name": "transactionGroup",
"type": "uint64",
"desc": "Transaction Group nonce"
},
{
"name": "signatures",
"type": "byte[64][]",
"desc": "Array of signatures"
}
],
"returns": {
"type": "void"
},
"events": [
{
"name": "SignatureSet",
"args": [
{
"name": "transactionGroup",
"type": "uint64"
},
{
"name": "signer",
"type": "address"
}
],
"desc": "Emitted when a new signature is added to a transaction group"
}
]
},
{
"name": "arc55_clearSignatures",
"desc": "Clear signatures for an address. Be aware this only removes it from the current state of the ledger, and indexers will still know and could use your signature",
"args": [
{
"name": "transactionGroup",
"type": "uint64",
"desc": "Transaction Group nonce"
},
{
"name": "address",
"type": "address",
"desc": "Address whose signatures to clear"
}
],
"returns": {
"type": "void"
},
"events": [
{
"name": "SignatureSet",
"args": [
{
"name": "transactionGroup",
"type": "uint64"
},
{
"name": "signer",
"type": "address"
}
],
"desc": "Emitted when a new signature is added to a transaction group"
}
]
},
{
"name": "createApplication",
"args": [],
"returns": {
"type": "void"
}
}
],
"events": [
{
"name": "TransactionAdded",
"args": [
{
"name": "transactionGroup",
"type": "uint64"
},
{
"name": "transactionIndex",
"type": "uint8"
}
],
"desc": "Emitted when a new transaction is added to a transaction group"
},
{
"name": "TransactionRemoved",
"args": [
{
"name": "transactionGroup",
"type": "uint64"
},
{
"name": "transactionIndex",
"type": "uint8"
}
],
"desc": "Emitted when a transaction has been removed from a transaction group"
},
{
"name": "SignatureSet",
"args": [
{
"name": "transactionGroup",
"type": "uint64"
},
{
"name": "signer",
"type": "address"
}
],
"desc": "Emitted when a new signature is added to a transaction group"
},
{
"name": "SignatureCleared",
"args": [
{
"name": "transactionGroup",
"type": "uint64"
},
{
"name": "signer",
"type": "address"
}
],
"desc": "Emitted when a signature has been removed from a transaction group"
}
]
}
```
### Usage
[Section titled “Usage”](#usage)
The deployment of an [ARC-55](/arc-standards/arc-0055)-compliant contract is not covered by the ARC and is instead left to the implementer for their own use case. An internal function `arc55_setAdmin` **SHOULD** be used to initialize an address which will be administering the setup. If left unset, the admin defaults to the creator address. Once the application exists on-chain, it must be set up before it can be used. The ARC-55 admin is responsible for setting up the multisignature metadata using the `arc55_setup(uint8,address[])void` method, passing in details about the signature threshold and signer accounts that will make up the multisignature address. After successful deployment and configuration, the application ID **SHOULD** be distributed among the involved parties (signers) as a one-time off-chain exchange. The setup process may be called multiple times to correct the multisignature metadata, as long as no one has created a new transaction group nonce. Once a transaction group nonce has been generated, the metadata is immutable.
Before any transactions or signatures can be stored, a new “transaction group nonce” must be generated using the `arc55_newTransactionGroup()uint64` method. This returns a unique value which **MUST** be used for all further [ARC-55](/arc-standards/arc-0055) interactions. This nonce value allows multiple pending transaction groups to be available simultaneously under the same contract deployment. Do not confuse this value with a transaction group hash. It’s entirely possible to add multiple non-grouped transactions, or multiple different groups, under a single transaction group nonce, up to a limit of 255 transactions; however, it’s unlikely ARC-55 clients will facilitate this.
Using a transaction group nonce, the admin or any signer **MAY** add transactions one at a time to that transaction group by providing the transaction data and the index of that transaction within the group using `arc55_addTransaction(pay,uint64,uint8,byte[])void`. A mandatory payment transaction **MUST** be included before the application call and must cover any minimum balance requirements that result from storing the transaction data. When adding transactions, the index **MUST** start at 0. Once a transaction has successfully been used or is no longer needed, any signer **MAY** remove the transaction data from the group using the `arc55_removeTransaction(uint64,uint8)void` method. This will result in the minimum balance requirement being freed up and sent to the transaction sender.
Signers **MAY** provide their signatures for a particular transaction group by using the `arc55_setSignatures(pay,uint64,byte[64][])void` method. This requires paying the minimum balance requirement used to store the signatures, which is returned once they are removed. Any signer **MAY** also remove their own or others’ signatures from the contract using the `arc55_clearSignatures(uint64,address)void` method; however, this may not prevent someone from using a signature. Once a signature has been shared publicly, anyone who can gather enough signatures to meet the threshold can use it to submit the transaction.
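The box-storage MBR formulas quoted in the method descriptions above follow Algorand's standard box cost of `2500 + 400 * (name length + value length)`, with 9-byte transaction box names (uint64 nonce + uint8 index) and 40-byte signature box names (uint64 nonce + 32-byte public key). As plain functions:

```typescript
// MBR increase (in microAlgos) for storing a transaction of the given size:
// (2500) + (400 * (9 + transaction.length)), where 9 is the box name length.
function mbrTxnIncrease(transactionSize: number): number {
  return 2500 + 400 * (9 + transactionSize);
}

// MBR increase (in microAlgos) for storing signatures of the given size:
// (2500) + (400 * (40 + signatures.length)), where 40 is the box name length.
function mbrSigIncrease(signaturesSize: number): number {
  return 2500 + 400 * (40 + signaturesSize);
}
```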
Once a transaction receives enough signatures to meet the threshold and falls within the valid rounds of the transaction, anyone **MAY** construct the multisignature transaction by including all the signatures and submitting it to the network. Subsequently, participants **SHOULD** clear the signatures and transaction data from the contract.
Whilst it’s not part of the ARC, an [ARC-55](/arc-standards/arc-0055)-compliant contract **MAY** be destroyed once it is no longer needed. The process **SHOULD** be performed by the admin and/or application creator by first reclaiming any outstanding Algo funds (removing transactions and clearing signatures), which avoids permanently locking Algo on the network, and then issuing the `DeleteApplication` call and closing out the application address. It’s important to note that destroying the application does not render the multisignature account inaccessible, as a new deployment with the same multisignature metadata can be configured and used.
Below is a typical expected lifecycle:
* Creator deploys an ARC-55 compliant smart contract.
* Admin performs setup: Setting threshold to 2, and including 2 signer addresses.
* Either signer can now generate a new transaction group.
* Either signer can add a new transaction to sign to the transaction group, providing the MBR.
* Signer 1 provides their signatures to the transaction group, providing their MBR.
* Signer 2 provides their signatures to the transaction group, providing their MBR.
* Anyone can now submit the transaction to the network.
* Either signer can now clear the signatures of each signer, refunding their MBR to each account.
* Either signer can remove the transaction since it’s now committed to the network, refunding the MBR to the transaction sender.
### Storage
[Section titled “Storage”](#storage)
```plaintext
n = Transaction group nonce (uint64)
i = Transaction index within group (uint8)
addr = signers address (byte[32])
```
| Type | Key | Value | Description |
| ------ | ----------------- | ------- | ------------------------------------------------------------ |
| Global | `arc55_threshold` | uint64 | The multisig signature threshold |
| Global | `arc55_nonce` | uint64 | The ARC-55 transaction group nonce |
| Global | `arc55_admin` | Address | The admin responsible for calling `arc55_setup` |
| Box | n+i | byte\[] | The ith transaction data for the nth transaction group nonce |
| Box | n+addr | byte\[] | The signatures for the nth transaction group |
| Global | uint8 | Address | The signer address index for the multisig |
| Global | Address | uint64 | The number of times this signer appears in the multisig |
Whilst the data can be read directly from the application’s storage, there are also read-only methods for use with Algod’s simulate endpoint to retrieve the data. Below is a summary of each piece of data, how and where it’s stored, and its associated method call.
#### Threshold
[Section titled “Threshold”](#threshold)
The threshold is stored in the global state of the application as a uint64 value. It is immutable after the first transaction group nonce has been generated.
The associated read-only method is `arc55_getThreshold()uint64`, which will return the signature threshold for the multisignature account.
#### Multisig Signer Addresses
[Section titled “Multisig Signer Addresses”](#multisig-signer-addresses)
A multisignature address is made up of one or more addresses. The contract stores these addresses in global state twice: once keyed by positional index, and a second time to record how many times each address is used. This allows for simpler on-chain processing within the smart contract to identify 1) whether the account is used, and 2) where the account should be placed when reconstructing the multisignature.
There are two associated read-only methods for obtaining and checking multisignature signer addresses. To retrieve the list of indexed addresses, you **SHOULD** use `arc55_getSignerByIndex(uint64)address`, which returns the signer address at the given multisignature index. This can be called incrementally until you reach the end of the available indexes. To check whether an address is a signer for the multisignature account, you **SHOULD** use `arc55_isSigner(address)bool`, which returns `true` or `false`.
#### Transactions
[Section titled “Transactions”](#transactions)
All transactions are stored individually within boxes, where each box name identifies the related transaction group nonce and transaction index: the name is a concatenation of a uint64 (the nonce) and a uint8 (the index). This allows off-chain services to list all boxes belonging to an application and quickly group and identify how many transaction groups and transactions are available.
The associated read-only method is `arc55_getTransaction(uint64,uint8)byte[]`, which will return the transaction for a given transaction group nonce and transaction index. Note: To retrieve data larger than 1024 bytes, simulate must be called with `AllowMoreLogging` set to true.
Example:

* Transaction Group nonce: `1` (uint64)
* Transaction index: `0` (uint8)
* Hex: `000000000000000100`
* Box name: `AAAAAAAAAAEA` (base64)
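The example box name above can be reproduced with a short sketch; the helper names are illustrative, not part of any SDK:

```typescript
// Builds an ARC-55 transaction box name: an 8-byte big-endian transaction
// group nonce followed by a 1-byte transaction index.
function txnBoxName(nonce: number, index: number): Uint8Array {
  const name = new Uint8Array(9);
  let n = nonce;
  for (let i = 7; i >= 0; i--) {
    // big-endian uint64; safe for nonces below 2^53
    name[i] = n % 256;
    n = Math.floor(n / 256);
  }
  name[8] = index;
  return name;
}

function toHex(bytes: Uint8Array): string {
  return Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
}

// Nonce 1, index 0 → hex 000000000000000100 (base64 "AAAAAAAAAAEA")
```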
#### Signatures
[Section titled “Signatures”](#signatures)
Signers store their signatures in a single box per transaction group nonce, where multiple signatures **MUST** be concatenated together in the same order as the transactions within the group. The box name is made up of the transaction group nonce and the signer’s public key, which is later used when removing the signatures to identify where to refund the minimum balance requirement.
The associated read-only method is `arc55_getSignatures(uint64,address)byte[64][]`, which will return the signatures for a given transaction group nonce and signer address.
Example:

* Transaction Group nonce: `1` (uint64)
* Signer: `ALICE7Y2JOFGG2VGUC64VINB75PI56O6M2XW233KG2I3AIYJFUD4QMYTJM` (address)
* Hex: `000000000000000102d0227f1a4b8a636aa6a0bdcaa1a1ff5e8ef9de66af6d6f6a3691b023092d07`
* Box name: `AAAAAAAAAAEC0CJ/GkuKY2qmoL3KoaH/Xo753mavbW9qNpGwIwktBw==` (base64)
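A sketch of the signature box name construction follows. Decoding an Algorand address string to its 32-byte public key is omitted here, so the key is supplied as hex; the helper names are illustrative:

```typescript
// Builds an ARC-55 signature box name: an 8-byte big-endian transaction
// group nonce followed by the signer's 32-byte public key (given as hex).
function sigBoxName(nonce: number, pubKeyHex: string): Uint8Array {
  const name = new Uint8Array(8 + pubKeyHex.length / 2);
  let n = nonce;
  for (let i = 7; i >= 0; i--) {
    // big-endian uint64; safe for nonces below 2^53
    name[i] = n % 256;
    n = Math.floor(n / 256);
  }
  for (let i = 0; i < pubKeyHex.length / 2; i++) {
    name[8 + i] = parseInt(pubKeyHex.slice(i * 2, i * 2 + 2), 16);
  }
  return name;
}

function toHex(bytes: Uint8Array): string {
  return Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
}
```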
## Rationale
[Section titled “Rationale”](#rationale)
Establishing individual deployments for distinct user groups, as opposed to relying on a singular instance accessible to all, presents numerous advantages. Initially, this approach facilitates the implementation and expansion of functionalities well beyond the scope initially envisioned by the ARC. It enables the integration of entirely customized smart contracts that adhere to [ARC-55](/arc-standards/arc-0055) while avoiding being constrained by it.
Furthermore, in the context of third-party infrastructure, the management of numerous boxes for a single monolithic application can become increasingly cumbersome over time. In contrast, by empowering small groups to create their own multisig applications, each group can subscribe exclusively to its unique application ID, streamlining the monitoring of new transactions and signatures.
### Limitations and Design Decisions
[Section titled “Limitations and Design Decisions”](#limitations-and-design-decisions)
The available transaction size is the most critical limitation within this implementation. For transactions larger than 2048 bytes (the maximum application argument size), additional transactions using the method `arc55_addTransactionContinued(byte[])void` can be used and sent within the same group as the `arc55_addTransaction(pay,uint64,uint8,byte[])void` call. This will allow the storing of up to 4096 bytes per transaction. Note: The minimum balance requirement must be paid in full by the preceding payment transaction of the `addTransaction` call.
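A minimal sketch of the splitting described above, assuming the 2048-byte application-argument cap; the helper name is hypothetical:

```typescript
// Maximum size of a single application argument, in bytes.
const MAX_ARG_SIZE = 2048;

// Splits a transaction into at most two chunks: the first for
// arc55_addTransaction and the second, if needed, for
// arc55_addTransactionContinued. Hypothetical helper, not part of any SDK.
function splitTransaction(txn: Uint8Array): Uint8Array[] {
  if (txn.length > 2 * MAX_ARG_SIZE) {
    throw new Error("transaction exceeds the 4096-byte storage limit");
  }
  const chunks: Uint8Array[] = [txn.slice(0, MAX_ARG_SIZE)];
  if (txn.length > MAX_ARG_SIZE) chunks.push(txn.slice(MAX_ARG_SIZE));
  return chunks;
}
```

Both calls are then sent in the same group, with the preceding payment transaction covering the full minimum balance requirement.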
This ARC inherently promotes transparency of transactions and signers. If an additional layer of anonymity is required, an extension to this ARC **SHOULD** be proposed, outlining how to store and share encrypted data.
The current design necessitates that all transactions within the group be exclusively signed by the constituents of the multisig account. If a group transaction requires a separate signature from another account or a logicsig, this design does not support it. An extension to this ARC **SHOULD** be considered to address such scenarios.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
A TEALScript reference implementation is available at [`github.com/nullun/arc55-msig-app`](https://github.com/nullun/arc55-msig-app). This version has been written as an inheritable class, so it can be included on top of an existing project to give you an ARC-55-compliant interface. Others are encouraged to implement this standard in their preferred smart contract language and even extend its capabilities whilst adhering to the provided ABI specification.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
This ARC’s design solely involves storing existing data structures and does not have the capability to create or use multisignature accounts. Therefore, the security implications are minimal. End users are expected to review each transaction before generating a signature for it. If a smart contract implementing this ARC lacks proper security checks, the worst-case scenario would involve incorrect transactions and invalid signatures being stored on-chain, along with the potential loss of the minimum balance requirement from the application account.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Extended App Description
> Adds information to the ABI JSON description
## Abstract
[Section titled “Abstract”](#abstract)
This ARC takes the existing JSON description of a contract as described in [ARC-4](/arc-standards/arc-0004) and adds more fields for the purpose of client interaction.
## Motivation
[Section titled “Motivation”](#motivation)
The data provided by ARC-4 is missing a lot of critical information that clients should know when interacting with an app. This means ARC-4 is insufficient to generate type-safe clients that provide a superior developer experience.
On the other hand, [ARC-32](/arc-standards/arc-0032) provides the vast majority of useful information that can be used to [generate typed clients](https://github.com/algorandfoundation/algokit-cli/blob/main/docs/features/generate.md#1-typed-clients), but requires a separate JSON file on top of the ARC-4 JSON file, which adds extra complexity and cognitive overhead.
## Specification
[Section titled “Specification”](#specification)
### Contract Interface
[Section titled “Contract Interface”](#contract-interface)
Every application is described via the following interface which is an extension of the `Contract` interface described in [ARC-4](/arc-standards/arc-0004).
```ts
/** Describes the entire contract. This interface is an extension of the interface described in ARC-4 */
interface Contract {
/** The ARCs used and/or supported by this contract. All contracts implicitly support ARC4 and ARC56 */
arcs: number[];
/** A user-friendly name for the contract */
name: string;
/** Optional, user-friendly description for the interface */
desc?: string;
/**
* Optional object listing the contract instances across different networks.
* The key is the base64 genesis hash of the network, and the value contains
* information about the deployed contract in the network indicated by the
* key. A key containing the human-readable name of the network MAY be
* included, but the corresponding genesis hash key MUST also be defined
*/
networks?: {
[network: string]: {
/** The app ID of the deployed contract in this network */
appID: number;
};
};
/** Named structs used by the application. Each struct field appears in the same order as ABI encoding. */
structs: { [structName: StructName]: StructField[] };
/** All of the methods that the contract implements */
methods: Method[];
state: {
/** Defines the values that should be used for GlobalNumUint, GlobalNumByteSlice, LocalNumUint, and LocalNumByteSlice when creating the application */
schema: {
global: {
ints: number;
bytes: number;
};
local: {
ints: number;
bytes: number;
};
};
/** Mapping of human-readable names to StorageKey objects */
keys: {
global: { [name: string]: StorageKey };
local: { [name: string]: StorageKey };
box: { [name: string]: StorageKey };
};
/** Mapping of human-readable names to StorageMap objects */
maps: {
global: { [name: string]: StorageMap };
local: { [name: string]: StorageMap };
box: { [name: string]: StorageMap };
};
};
/** Supported bare actions for the contract. An action is a combination of call/create and an OnComplete */
bareActions: {
/** OnCompletes this method allows when appID === 0 */
create: ("NoOp" | "OptIn" | "DeleteApplication")[];
/** OnCompletes this method allows when appID !== 0 */
call: (
| "NoOp"
| "OptIn"
| "CloseOut"
| "UpdateApplication"
| "DeleteApplication"
)[];
};
/** Information about the TEAL programs */
sourceInfo?: {
/** Approval program information */
approval: ProgramSourceInfo;
/** Clear program information */
clear: ProgramSourceInfo;
};
/** The pre-compiled TEAL that may contain template variables. MUST be omitted if included as part of ARC23 */
source?: {
/** The approval program */
approval: string;
/** The clear program */
clear: string;
};
/** The compiled bytecode for the application. MUST be omitted if included as part of ARC23 */
byteCode?: {
/** The approval program */
approval: string;
/** The clear program */
clear: string;
};
/** Information used to get the given byteCode and/or PC values in sourceInfo. MUST be given if byteCode or PC values are present */
compilerInfo?: {
/** The name of the compiler */
compiler: "algod" | "puya";
/** Compiler version information */
compilerVersion: {
major: number;
minor: number;
patch: number;
commitHash?: string;
};
};
/** ARC-28 events that MAY be emitted by this contract */
events?: Event[];
/** A mapping of template variable names as they appear in the TEAL (not including TMPL_ prefix) to their respective types and values (if applicable) */
templateVariables?: {
[name: string]: {
/** The type of the template variable */
type: ABIType | AVMType | StructName;
/** If given, the base64 encoded value used for the given app/program */
value?: string;
};
};
/** The scratch variables used during runtime */
scratchVariables?: {
[name: string]: {
slot: number;
type: ABIType | AVMType | StructName;
};
};
}
```
### Method Interface
[Section titled “Method Interface”](#method-interface)
Every method in the contract is described via a `Method` interface. This interface is an extension of the one defined in [ARC-4](/arc-standards/arc-0004).
```ts
/** Describes a method in the contract. This interface is an extension of the interface described in ARC-4 */
interface Method {
/** The name of the method */
name: string;
/** Optional, user-friendly description for the method */
desc?: string;
/** The arguments of the method, in order */
args: Array<{
/** The type of the argument. The `struct` field should also be checked to determine if this arg is a struct. */
type: ABIType;
/** If the type is a struct, the name of the struct */
struct?: StructName;
/** Optional, user-friendly name for the argument */
name?: string;
/** Optional, user-friendly description for the argument */
desc?: string;
/** The default value that clients should use. */
defaultValue?: {
/** Where the default value is coming from
* - box: The data key signifies the box key to read the value from
* - global: The data key signifies the global state key to read the value from
* - local: The data key signifies the local state key to read the value from (for the sender)
* - literal: the value is a literal and should be passed directly as the argument
* - method: The utf8 signature of the method in this contract to call to get the default value. If the method has arguments, they all must have default values. The method **MUST** be readonly so simulate can be used to get the default value.
*/
source: "box" | "global" | "local" | "literal" | "method";
/** Base64 encoded bytes, base64 ARC4 encoded uint64, or UTF-8 method selector */
data: string;
/** How the data is encoded. This is the encoding for the data provided here, not the arg type. Undefined if the data is method selector */
type?: ABIType | AVMType;
};
}>;
/** Information about the method's return value */
returns: {
/** The type of the return value, or "void" to indicate no return value. The `struct` field should also be checked to determine if this return value is a struct. */
type: ABIType;
/** If the type is a struct, the name of the struct */
struct?: StructName;
/** Optional, user-friendly description for the return value */
desc?: string;
};
/** an action is a combination of call/create and an OnComplete */
actions: {
/** OnCompletes this method allows when appID === 0 */
create: ("NoOp" | "OptIn" | "DeleteApplication")[];
/** OnCompletes this method allows when appID !== 0 */
call: (
| "NoOp"
| "OptIn"
| "CloseOut"
| "UpdateApplication"
| "DeleteApplication"
)[];
};
/** If this method does not write anything to the ledger (ARC-22) */
readonly?: boolean;
/** ARC-28 events that MAY be emitted by this method */
events?: Event[];
/** Information that clients can use when calling the method */
recommendations?: {
/** The number of inner transactions the caller should cover the fees for */
innerTransactionCount?: number;
/** Recommended box references to include */
boxes?: {
/** The app ID for the box */
app?: number;
/** The base64 encoded box key */
key: string;
/** The number of bytes being read from the box */
readBytes: number;
/** The number of bytes being written to the box */
writeBytes: number;
};
/** Recommended foreign accounts */
accounts?: string[];
/** Recommended foreign apps */
apps?: number[];
/** Recommended foreign assets */
assets?: number[];
};
}
```
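As an illustration of the `defaultValue` field, a hypothetical `fee` argument with a literal default of 1000 could be described as follows. The argument name and value are invented for this example; `data` is the base64 encoding of the ARC-4 (8-byte big-endian) encoding of the uint64:

```json
{
  "name": "fee",
  "type": "uint64",
  "desc": "Fee in microALGO; clients may omit this and use the default",
  "defaultValue": {
    "source": "literal",
    "data": "AAAAAAAAA+g=",
    "type": "uint64"
  }
}
```

Because `source` is `literal`, clients decode `data` as an ABI `uint64` and pass the resulting value (1000) directly as the argument.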
### Event Interface
[Section titled “Event Interface”](#event-interface)
[ARC-28](/arc-standards/arc-0028) events are described using an extension of the original interface described in that ARC, with the addition of an optional struct field for arguments.
```ts
interface Event {
/** The name of the event */
name: string;
/** Optional, user-friendly description for the event */
desc?: string;
/** The arguments of the event, in order */
args: Array<{
/** The type of the argument. The `struct` field should also be checked to determine if this arg is a struct. */
type: ABIType;
/** Optional, user-friendly name for the argument */
name?: string;
/** Optional, user-friendly description for the argument */
desc?: string;
/** If the type is a struct, the name of the struct */
struct?: StructName;
}>;
}
```
### Type Interfaces
[Section titled “Type Interfaces”](#type-interfaces)
The types defined in [ARC-4](/arc-standards/arc-0004) may not fully describe the best way to use the ABI values as intended by the contract developers. These type interfaces are intended to supplement ABI types so clients can interact with the contract as intended.
```ts
/** An ABI-encoded type */
type ABIType = string;
/** The name of a defined struct */
type StructName = string;
/** Raw byteslice without the length prefix that is specified in ARC-4 */
type AVMBytes = "AVMBytes";
/** A UTF-8 string without the length prefix that is specified in ARC-4 */
type AVMString = "AVMString";
/** A 64-bit unsigned integer */
type AVMUint64 = "AVMUint64";
/** A native AVM type */
type AVMType = AVMBytes | AVMString | AVMUint64;
/** Information about a single field in a struct */
interface StructField {
/** The name of the struct field */
name: string;
/** The type of the struct field's value */
type: ABIType | StructName | StructField[];
}
```
### Storage Interfaces
[Section titled “Storage Interfaces”](#storage-interfaces)
These interfaces describe how app storage is accessed within the contract.
```ts
/** Describes a single key in app storage */
interface StorageKey {
/** Description of what this storage key holds */
desc?: string;
/** The type of the key */
keyType: ABIType | AVMType | StructName;
/** The type of the value */
valueType: ABIType | AVMType | StructName;
/** The bytes of the key encoded as base64 */
key: string;
}
/** Describes a mapping of key-value pairs in storage */
interface StorageMap {
/** Description of what the key-value pairs in this mapping hold */
desc?: string;
/** The type of the keys in the map */
keyType: ABIType | AVMType | StructName;
/** The type of the values in the map */
valueType: ABIType | AVMType | StructName;
/** The base64-encoded prefix of the map keys */
prefix?: string;
}
```
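To illustrate the `StorageKey` interface, a hypothetical global `counter` key could be declared as follows. Note that `key` is the base64 encoding of the raw key bytes (here the UTF-8 string `"counter"`):

```json
"keys": {
  "global": {
    "counter": {
      "desc": "Number of times the app has been called",
      "keyType": "AVMString",
      "valueType": "AVMUint64",
      "key": "Y291bnRlcg=="
    }
  },
  "local": {},
  "box": {}
}
```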
### SourceInfo Interface
[Section titled “SourceInfo Interface”](#sourceinfo-interface)
These interfaces give clients more information about the contract’s source code.
```ts
interface ProgramSourceInfo {
/** The source information for the program */
sourceInfo: SourceInfo[];
/** How the program counter offset is calculated
* - none: The pc values in sourceInfo are not offset
* - cblocks: The pc values in sourceInfo are offset by the PC of the first op following the last cblock at the top of the program
*/
pcOffsetMethod: "none" | "cblocks";
}
interface SourceInfo {
/** The program counter value(s). Could be offset if pcOffsetMethod is not "none" */
pc: number[];
/** A human-readable string that describes the error when the program fails at the given PC */
errorMessage?: string;
/** The TEAL line number that corresponds to the given PC. RECOMMENDED to be used for development purposes, but not required for clients */
teal?: number;
/** The original source file and line number that corresponds to the given PC. RECOMMENDED to be used for development purposes, but not required for clients */
source?: string;
}
```
### Template Variables
[Section titled “Template Variables”](#template-variables)
Template variables are variables in the TEAL that should be substituted prior to compilation. The usage of the variable **MUST** appear in the TEAL starting with `TMPL_`. Template variables **MUST** be an argument to either `bytecblock` or `intcblock`. If a program has template variables, `bytecblock` and `intcblock` **MUST** be the first two opcodes in the program (unless one is not used).
#### Example
[Section titled “Example”](#example)
```plaintext
#pragma version 10
bytecblock 0xdeadbeef TMPL_FOO
intcblock 0x12345678 TMPL_BAR
```
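Substitution itself is plain text replacement on the TEAL source before compilation. A minimal sketch in TypeScript (the helper name is illustrative, not part of this ARC; values must already be rendered as valid TEAL constants):

```typescript
// Minimal sketch: replace TMPL_ template variables in TEAL source
// before handing it to the compiler. Values must be valid TEAL
// constants (e.g. "0xcafe" for bytes, "42" for ints).
function substituteTemplateVars(
  teal: string,
  vars: Record<string, string>
): string {
  return Object.entries(vars).reduce(
    (src, [name, value]) => src.split(`TMPL_${name}`).join(value),
    teal
  );
}

const teal = `#pragma version 10
bytecblock 0xdeadbeef TMPL_FOO
intcblock 0x12345678 TMPL_BAR`;

console.log(substituteTemplateVars(teal, { FOO: "0xcafe", BAR: "42" }));
```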
### Dynamic Template Variables
[Section titled “Dynamic Template Variables”](#dynamic-template-variables)
When a program has a template variable with a dynamic length, the `pcOffsetMethod` in `ProgramSourceInfo` **MUST** be `cblocks`. The `pc` value in each `SourceInfo` **MUST** be the pc determined at compilation minus the last `pc` value of the last `cblock` at compilation.
When a client is leveraging a source map with `cblocks` as the `pcOffsetMethod`, it **MUST** determine the `pc` value by parsing the bytecode to get the PC value of the first op following the last `cblock` at the top of the program. See the reference implementation section for an example of how to do this.
## Rationale
[Section titled “Rationale”](#rationale)
ARC-32 essentially addresses the same problem, but it requires the generation of two separate JSON files and the ARC-32 JSON file contains the ARC-4 JSON file within it (redundant information). The goal of this ARC is to create one JSON schema that is backwards compatible with ARC-4 clients, but contains the relevant information needed to automatically generate comprehensive client experiences.
### State
[Section titled “State”](#state)
Describes all of the state that MAY exist in the app and how one should decode its values. The `schema` property provides the schema required when creating the app.
### Named Structs
[Section titled “Named Structs”](#named-structs)
It is common for high-level languages to support named structs, which give names to the indexes of elements in an ABI tuple. The same structs should be usable on the client side just as they are used in the contract.
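For example, a hypothetical `Point` struct maps the ABI tuple `(uint64,uint64)` to named fields, so a client can decode a return value as `{ x, y }` rather than indexing into a tuple (the struct and method names are invented for this example):

```json
{
  "structs": {
    "Point": [
      { "name": "x", "type": "uint64" },
      { "name": "y", "type": "uint64" }
    ]
  },
  "methods": [
    {
      "name": "getLocation",
      "args": [],
      "returns": { "type": "(uint64,uint64)", "struct": "Point" },
      "actions": { "create": [], "call": ["NoOp"] }
    }
  ]
}
```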
### Action
[Section titled “Action”](#action)
This is one of the biggest deviations from ARC-32, but it provides a much simpler interface for describing and understanding what any given method can do.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
The JSON schema defined in this ARC should be compatible with all ARC-4 clients, provided they don’t do any strict schema checking for extraneous fields.
## Test Cases
[Section titled “Test Cases”](#test-cases)
NA
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
### Calculating cblock Offsets
[Section titled “Calculating cblock Offsets”](#calculating-cblock-offsets)
Below is an example of how to determine the TEAL/source line for a PC from an algod error message when the `pcOffsetMethod` is `cblocks`.
```ts
/** An ARC56 JSON file */
import arc56Json from "./arc56.json";
/** The bytecblock opcode */
const BYTE_CBLOCK = 38;
/** The intcblock opcode */
const INT_CBLOCK = 32;
/**
* Get the offset of the last constant block at the beginning of the program
* This value is used to calculate the program counter for an ARC56 program that has a pcOffsetMethod of "cblocks"
*
* @param program The program to parse
* @returns The PC value of the opcode after the last constant block
*/
function getConstantBlockOffset(program: Uint8Array) {
const bytes = [...program];
const programSize = bytes.length;
bytes.shift(); // remove version
/** The PC of the opcode after the bytecblock */
let bytecblockOffset: number | undefined;
/** The PC of the opcode after the intcblock */
let intcblockOffset: number | undefined;
while (bytes.length > 0) {
/** The current byte from the beginning of the byte array */
const byte = bytes.shift()!;
// If the byte is a constant block...
if (byte === BYTE_CBLOCK || byte === INT_CBLOCK) {
const isBytecblock = byte === BYTE_CBLOCK;
/** The byte following the opcode is the number of values in the constant block */
const valuesRemaining = bytes.shift()!;
// Iterate over all the values in the constant block
for (let i = 0; i < valuesRemaining; i++) {
if (isBytecblock) {
/** The byte following the opcode is the length of the next element */
const length = bytes.shift()!;
bytes.splice(0, length);
} else {
// intcblock is a uvarint, so we need to keep reading until we find the end (MSB is not set)
while ((bytes.shift()! & 0x80) !== 0) {
// Do nothing...
}
}
}
if (isBytecblock) bytecblockOffset = programSize - bytes.length - 1;
else intcblockOffset = programSize - bytes.length - 1;
if (bytes[0] !== BYTE_CBLOCK && bytes[0] !== INT_CBLOCK) {
// if the next opcode isn't a constant block, we're done
break;
}
}
}
return Math.max(bytecblockOffset ?? 0, intcblockOffset ?? 0);
}
/** The error message from algod */
const algodError =
"Network request error. Received status 400 (Bad Request): TransactionPool.Remember: transaction ZR2LAFLRQYFZFV6WVKAPH6CANJMIBLLH5WRTSWT5CJHFVMF4UIFA: logic eval error: assert failed pc=162. Details: app=11927, pc=162, opcodes=log; intc_0 // 0; assert";
/** The PC of the error */
const pc = Number(algodError.match(/pc=(\d+)/)![1]);
// Parse the ARC56 JSON to determine if the PC values are offset by the constant blocks
if (arc56Json.sourceInfo.approval.pcOffsetMethod === "cblocks") {
/** The program can either be cached locally OR retrieved via the algod API */
const program = new Uint8Array([
10, 32, 3, 0, 1, 6, 38, 3, 64, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48,
48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48,
48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48,
48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 32, 48, 48, 48,
48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48,
48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 3, 102, 111, 111, 40, 41, 34, 42,
49, 24, 20, 129, 6, 11, 49, 25, 8, 141, 12, 0, 85, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 71, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 136, 0, 3, 129, 1, 67, 138, 0,
0, 42, 176, 34, 68, 137, 136, 0, 3, 129, 1, 67, 138, 0, 0, 42, 40, 41, 132,
137, 136, 0, 3, 129, 1, 67, 138, 0, 0, 0, 137, 128, 4, 21, 31, 124, 117,
136, 0, 13, 73, 21, 22, 87, 6, 2, 76, 80, 80, 176, 129, 1, 67, 138, 0, 1,
34, 22, 137, 129, 1, 67, 128, 4, 184, 68, 123, 54, 54, 26, 0, 142, 1, 255,
240, 0, 128, 4, 154, 113, 210, 180, 128, 4, 223, 77, 92, 59, 128, 4, 61,
135, 13, 135, 128, 4, 188, 11, 23, 6, 54, 26, 0, 142, 4, 255, 135, 255, 149,
255, 163, 255, 174, 0,
]);
/** Get the offset of the last constant block */
const offset = getConstantBlockOffset(program);
/** Find the source info object that corresponds to the error's PC */
const sourceInfoObject = arc56Json.sourceInfo.approval.sourceInfo.find((s) =>
s.pc.includes(pc - offset)
)!;
/** Get the TEAL line and source line that corresponds to the error */
console.log(
`Error at PC ${pc} corresponds to TEAL line ${sourceInfoObject.teal} and source line ${sourceInfoObject.source}`
);
}
```
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
The type values used in methods **MUST** be correct, because if they were not then the method would not be callable. For state, however, it is possible to have an incorrect type encoding defined. Any significant security concern from this possibility is not immediately evident, but it is worth considering.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ASA Inbox Router
> An application that can route ASAs to users or hold them to later be claimed
## Abstract
[Section titled “Abstract”](#abstract)
The goal of this ARC is to establish a standard in the Algorand ecosystem by which ASAs can be sent to an intended receiver even if their account is not opted in to the ASA.
A wallet custodied by an application will be used to custody assets on behalf of a given user, with only that user being able to withdraw assets. A master application will be used to map inbox addresses to user addresses. This master application can route ASAs to users, performing whatever actions are necessary.
If integrated into ecosystem technologies including wallets, explorers, and dApps, this standard can provide enhanced capabilities around ASAs which are otherwise strictly bound at the protocol level to require opting in to be received.
## Motivation
[Section titled “Motivation”](#motivation)
Algorand requires accounts to opt in to receive any ASA, a fact which simultaneously:
1. Grants account holders fine-grained control over their holdings by allowing them to select which assets to allow and preventing receipt of unwanted tokens.
2. Frustrates users and developers who must account for this requirement, especially since other blockchains do not impose it.
This ARC lays out a new way to navigate the ASA opt in requirement.
### Contemplated Use Cases
[Section titled “Contemplated Use Cases”](#contemplated-use-cases)
The following use cases help explain how this capability can enhance the possibilities within the Algorand ecosystem.
#### Airdrops
[Section titled “Airdrops”](#airdrops)
An ASA creator who wants to send their asset to a set of accounts faces the challenge of needing their intended receivers to opt in to the ASA ahead of time, which requires non-trivial communication efforts and precludes the possibility of completing the airdrop as a surprise. This claimable ASA standard creates the ability to send an airdrop out to individual addresses so that the receivers can opt in and claim the asset at their convenience—or not, if they so choose.
#### Reducing New User On-boarding Friction
[Section titled “Reducing New User On-boarding Friction”](#reducing-new-user-on-boarding-friction)
An application operator who wants to on-board users to their game or business may want to reduce the friction of getting people started by decoupling their application on-boarding process from the process of funding a non-custodial Algorand wallet, if users are wholly new to the Algorand ecosystem. As long as the receiver’s address is known, an ASA can be sent to them ahead of them having ALGOs in their wallet to cover the minimum balance requirement and opt in to the asset.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### Deployments
[Section titled “Deployments”](#deployments)
This ARC works best when there is a singleton deployment per network. Below are the app IDs for the canonical deployments:
| Network | App ID |
| ------- | ------------ |
| Mainnet | `2449590623` |
| Testnet | `643020148` |
### Router Contract [ARC-56](/arc-standards/arc-0056) JSON
[Section titled “Router Contract ARC-56 JSON”](#router-contract-arc-56-json)
```json
{
"name": "ARC59",
"desc": "",
"methods": [
{
"name": "createApplication",
"desc": "Deploy ARC59 contract",
"args": [],
"returns": {
"type": "void"
},
"actions": {
"create": ["NoOp"],
"call": []
}
},
{
"name": "arc59_optRouterIn",
"desc": "Opt the ARC59 router into the ASA. This is required before this app can be used to send the ASA to anyone.",
"args": [
{
"name": "asa",
"type": "uint64",
"desc": "The ASA to opt into"
}
],
"returns": {
"type": "void"
},
"actions": {
"create": [],
"call": ["NoOp"]
}
},
{
"name": "arc59_getOrCreateInbox",
"desc": "Gets the existing inbox for the receiver or creates a new one if it does not exist",
"args": [
{
"name": "receiver",
"type": "address",
"desc": "The address to get or create the inbox for"
}
],
"returns": {
"type": "address",
"desc": "The inbox address"
},
"actions": {
"create": [],
"call": ["NoOp"]
}
},
{
"name": "arc59_getSendAssetInfo",
"args": [
{
"name": "receiver",
"type": "address",
"desc": "The address to send the asset to"
},
{
"name": "asset",
"type": "uint64",
"desc": "The asset to send"
}
],
"returns": {
"type": "(uint64,uint64,bool,bool,uint64,uint64)",
"desc": "Returns the following information for sending an asset:\nThe number of itxns required, the MBR required, whether the router is opted in, whether the receiver is opted in,\nand how much ALGO the receiver would need to claim the asset",
"struct": "SendAssetInfo"
},
"actions": {
"create": [],
"call": ["NoOp"]
}
},
{
"name": "arc59_sendAsset",
"desc": "Send an asset to the receiver",
"args": [
{
"name": "axfer",
"type": "axfer",
"desc": "The asset transfer to this app"
},
{
"name": "receiver",
"type": "address",
"desc": "The address to send the asset to"
},
{
"name": "additionalReceiverFunds",
"type": "uint64",
"desc": "The amount of ALGO to send to the receiver/inbox in addition to the MBR"
}
],
"returns": {
"type": "address",
"desc": "The address that the asset was sent to (either the receiver or their inbox)"
},
"actions": {
"create": [],
"call": ["NoOp"]
}
},
{
"name": "arc59_claim",
"desc": "Claim an ASA from the inbox",
"args": [
{
"name": "asa",
"type": "uint64",
"desc": "The ASA to claim"
}
],
"returns": {
"type": "void"
},
"actions": {
"create": [],
"call": ["NoOp"]
}
},
{
"name": "arc59_reject",
"desc": "Reject the ASA by closing it out to the ASA creator. Always sends two inner transactions.\nAll non-MBR ALGO balance in the inbox will be sent to the caller.",
"args": [
{
"name": "asa",
"type": "uint64",
"desc": "The ASA to reject"
}
],
"returns": {
"type": "void"
},
"actions": {
"create": [],
"call": ["NoOp"]
}
},
{
"name": "arc59_getInbox",
"desc": "Get the inbox address for the given receiver",
"args": [
{
"name": "receiver",
"type": "address",
"desc": "The receiver to get the inbox for"
}
],
"returns": {
"type": "address",
"desc": "Zero address if the receiver does not yet have an inbox, otherwise the inbox address"
},
"actions": {
"create": [],
"call": ["NoOp"]
}
},
{
"name": "arc59_claimAlgo",
"desc": "Claim any extra algo from the inbox",
"args": [],
"returns": {
"type": "void"
},
"actions": {
"create": [],
"call": ["NoOp"]
}
}
],
"arcs": [4, 56],
"structs": {
"SendAssetInfo": [
{
"name": "itxns",
"type": "uint64"
},
{
"name": "mbr",
"type": "uint64"
},
{
"name": "routerOptedIn",
"type": "bool"
},
{
"name": "receiverOptedIn",
"type": "bool"
},
{
"name": "receiverAlgoNeededForClaim",
"type": "uint64"
},
{
"name": "receiverAlgoNeededForWorstCaseClaim",
"type": "uint64"
}
]
},
"state": {
"schema": {
"global": {
"bytes": 0,
"ints": 0
},
"local": {
"bytes": 0,
"ints": 0
}
},
"keys": {
"global": {},
"local": {},
"box": {}
},
"maps": {
"global": {},
"local": {},
"box": {
"inboxes": {
"keyType": "address",
"valueType": "address"
}
}
}
},
"bareActions": {
"create": [],
"call": []
}
}
```
**NOTE:** This ARC-56 spec does not include the source information, including error mapping, because the deployment was compiled with a version of TEALScript that predates ARC-56 support.
### Sending an Asset
[Section titled “Sending an Asset”](#sending-an-asset)
When sending an asset, the sender **SHOULD** call `arc59_getSendAssetInfo` to determine relevant information about the receiver and the router. This information is returned as a tuple described below:
| Index | Object Property | Description | Type |
| ----- | -------------------------- | -------------------------------------------------------------------------------- | ------ |
| 0 | itxns | The number of itxns required | uint64 |
| 1     | mbr                        | The amount of ALGO the sender **MUST** send to the router contract to cover MBR  | uint64 |
| 2 | routerOptedIn | Whether the router is already opted in to the asset | bool |
| 3 | receiverOptedIn | Whether the receiver is already directly opted in to the asset | bool |
| 4 | receiverAlgoNeededForClaim | The amount of ALGO the receiver would currently need to claim the asset | uint64 |
This information can then be used to send the asset. An example of using this information to send an asset is shown in [the reference implementation section](#typescript-send-asset-function).
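The fee and payment amounts follow mechanically from this tuple. A minimal sketch under stated assumptions (the helper name and the min-fee constant are illustrative, not part of this ARC): the caller covers the outer app call plus every inner transaction, and attaches a payment of `mbr` plus any extra funds for the receiver.

```typescript
interface SendAssetInfo {
  itxns: bigint; // inner transactions the router will issue
  mbr: bigint; // microALGO the sender must pay the router to cover MBR
  routerOptedIn: boolean;
  receiverOptedIn: boolean;
  receiverAlgoNeededForClaim: bigint;
}

// Hypothetical helper: compute the total fee for the arc59_sendAsset
// call and the ALGO payment that must accompany it.
function sendAssetBudget(info: SendAssetInfo, additionalReceiverFunds: bigint) {
  const MIN_FEE = 1_000n; // assumed minimum transaction fee in microALGO
  return {
    totalFee: (info.itxns + 1n) * MIN_FEE, // outer call + each inner txn
    payment: info.mbr + additionalReceiverFunds,
  };
}

console.log(
  sendAssetBudget(
    {
      itxns: 5n,
      mbr: 200_000n,
      routerOptedIn: false,
      receiverOptedIn: false,
      receiverAlgoNeededForClaim: 0n,
    },
    0n
  )
); // → { totalFee: 6000n, payment: 200000n }
```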
### Claiming an Asset
[Section titled “Claiming an Asset”](#claiming-an-asset)
When claiming an asset, the claimer **MUST** call `arc59_claim` to claim the asset from their inbox. This will transfer the asset to the claimer and any extra ALGO in the inbox will be sent to the claimer.
Prior to sending the `arc59_claim` app call, a call to `arc59_claimAlgo` **SHOULD** be made to claim any extra ALGO in the inbox if the inbox balance is above its minimum balance.
An example of claiming an asset is shown in [the reference implementation section](#typescript-claim-function).
## Rationale
[Section titled “Rationale”](#rationale)
This design was created to offer a standard mechanism by which wallets, explorers, and dapps could enable users to send, receive, and find claimable ASAs without requiring any changes to the core protocol.
This ARC is intended to replace [ARC-12](/arc-standards/arc-0012). It is simpler than ARC-12, with the main feature lost being the sender's ability to recover MBR. Given the significant reduction in complexity, this is considered a worthwhile tradeoff. Making MBR non-recoverable also serves to disincentivize spam.
### Rejection
[Section titled “Rejection”](#rejection)
The initial proposal for this ARC included a method for burning that leveraged [ARC-54](/arc-standards/arc-0054). After further consideration, it was decided to replace the burn functionality with a reject method. The reject method does not burn the ASA; it simply closes it out to the creator. This decision was made to reduce the additional complexity and potential user friction that [ARC-54](/arc-standards/arc-0054) opt-ins introduced.
### Router MBR
[Section titled “Router MBR”](#router-mbr)
It should be noted that the MBR for the router contract itself is non-recoverable. This was an intentional decision that results in more predictable costs for assets that may frequently be sent through the router, such as stablecoins.
## Test Cases
[Section titled “Test Cases”](#test-cases)
Test cases for the JavaScript client and the [ARC-59](/arc-standards/arc-0059) smart contract implementation can be found [here](https://github.com/algorandfoundation/ARCs/tree/main/assets/arc-0059/__test__/)
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
A project with the full reference implementation, including the smart contract and JavaScript library (used for testing), can be found [here](https://github.com/algorandfoundation/ARCs/tree/main/assets/arc-0059/).
### Router Contract
[Section titled “Router Contract”](#router-contract)
This contract is written using TEALScript v0.90.3
```ts
/* eslint-disable max-classes-per-file */
// eslint-disable-next-line import/no-unresolved, import/extensions
import { Contract } from "@algorandfoundation/tealscript";
type SendAssetInfo = {
/**
* The total number of inner transactions required to send the asset through the router.
* This should be used to add extra fees to the app call
*/
itxns: uint64;
/** The total MBR the router needs to send the asset through the router. */
mbr: uint64;
/** Whether the router is already opted in to the asset or not */
routerOptedIn: boolean;
/** Whether the receiver is already directly opted in to the asset or not */
receiverOptedIn: boolean;
/** The amount of ALGO the receiver would currently need to claim the asset */
receiverAlgoNeededForClaim: uint64;
};
class ControlledAddress extends Contract {
@allow.create("DeleteApplication")
new(): Address {
sendPayment({
rekeyTo: this.txn.sender,
});
return this.app.address;
}
}
export class ARC59 extends Contract {
inboxes = BoxMap<Address, Address>();
/**
* Deploy ARC59 contract
*
*/
createApplication(): void {}
/**
* Opt the ARC59 router into the ASA. This is required before this app can be used to send the ASA to anyone.
*
* @param asa The ASA to opt into
*/
arc59_optRouterIn(asa: AssetID): void {
sendAssetTransfer({
assetReceiver: this.app.address,
assetAmount: 0,
xferAsset: asa,
});
}
/**
* Gets the existing inbox for the receiver or creates a new one if it does not exist
*
* @param receiver The address to get or create the inbox for
* @returns The inbox address
*/
arc59_getOrCreateInbox(receiver: Address): Address {
if (this.inboxes(receiver).exists) return this.inboxes(receiver).value;
const inbox = sendMethodCall({
onCompletion: OnCompletion.DeleteApplication,
approvalProgram: ControlledAddress.approvalProgram(),
clearStateProgram: ControlledAddress.clearProgram(),
});
this.inboxes(receiver).value = inbox;
return inbox;
}
/**
*
* @param receiver The address to send the asset to
* @param asset The asset to send
*
* @returns Returns the following information for sending an asset:
* The number of itxns required, the MBR required, whether the router is opted in, whether the receiver is opted in,
* and how much ALGO the receiver would need to claim the asset
*/
arc59_getSendAssetInfo(receiver: Address, asset: AssetID): SendAssetInfo {
const routerOptedIn = this.app.address.isOptedInToAsset(asset);
const receiverOptedIn = receiver.isOptedInToAsset(asset);
const info: SendAssetInfo = {
itxns: 1,
mbr: 0,
routerOptedIn: routerOptedIn,
receiverOptedIn: receiverOptedIn,
receiverAlgoNeededForClaim: 0,
};
if (receiverOptedIn) return info;
const algoNeededToClaim =
receiver.minBalance + globals.assetOptInMinBalance + globals.minTxnFee;
// Determine how much ALGO the receiver needs to claim the asset
if (receiver.balance < algoNeededToClaim) {
info.receiverAlgoNeededForClaim += algoNeededToClaim - receiver.balance;
}
// Add mbr and transaction for opting the router in
if (!routerOptedIn) {
info.mbr += globals.assetOptInMinBalance;
info.itxns += 1;
}
if (!this.inboxes(receiver).exists) {
// Two itxns to create inbox (create + rekey)
// One itxn to send MBR
// One itxn to opt in
info.itxns += 4;
// Calculate the MBR for the inbox box
const preMBR = globals.currentApplicationAddress.minBalance;
this.inboxes(receiver).value = globals.zeroAddress;
const boxMbrDelta = globals.currentApplicationAddress.minBalance - preMBR;
this.inboxes(receiver).delete();
// MBR = MBR for the box + min balance for the inbox + ASA MBR
info.mbr +=
boxMbrDelta + globals.minBalance + globals.assetOptInMinBalance;
return info;
}
const inbox = this.inboxes(receiver).value;
if (!inbox.isOptedInToAsset(asset)) {
// One itxn to opt in
info.itxns += 1;
if (!(inbox.balance >= inbox.minBalance + globals.assetOptInMinBalance)) {
// One itxn to send MBR
info.itxns += 1;
// MBR = ASA MBR
info.mbr += globals.assetOptInMinBalance;
}
}
return info;
}
/**
* Send an asset to the receiver
*
* @param receiver The address to send the asset to
* @param axfer The asset transfer to this app
* @param additionalReceiverFunds The amount of ALGO to send to the receiver/inbox in addition to the MBR
*
* @returns The address that the asset was sent to (either the receiver or their inbox)
*/
arc59_sendAsset(
axfer: AssetTransferTxn,
receiver: Address,
additionalReceiverFunds: uint64
): Address {
verifyAssetTransferTxn(axfer, {
assetReceiver: this.app.address,
});
// If the receiver is opted in, send directly to their account
if (receiver.isOptedInToAsset(axfer.xferAsset)) {
sendAssetTransfer({
assetReceiver: receiver,
assetAmount: axfer.assetAmount,
xferAsset: axfer.xferAsset,
});
if (additionalReceiverFunds !== 0) {
sendPayment({
receiver: receiver,
amount: additionalReceiverFunds,
});
}
return receiver;
}
const inboxExisted = this.inboxes(receiver).exists;
const inbox = this.arc59_getOrCreateInbox(receiver);
if (additionalReceiverFunds !== 0) {
sendPayment({
receiver: inbox,
amount: additionalReceiverFunds,
});
}
if (!inbox.isOptedInToAsset(axfer.xferAsset)) {
let inboxMbrDelta = globals.assetOptInMinBalance;
if (!inboxExisted) inboxMbrDelta += globals.minBalance;
// Ensure the inbox has enough balance to opt in
if (inbox.balance < inbox.minBalance + inboxMbrDelta) {
sendPayment({
receiver: inbox,
amount: inboxMbrDelta,
});
}
// Opt the inbox in
sendAssetTransfer({
sender: inbox,
assetReceiver: inbox,
assetAmount: 0,
xferAsset: axfer.xferAsset,
});
}
// Transfer the asset to the inbox
sendAssetTransfer({
assetReceiver: inbox,
assetAmount: axfer.assetAmount,
xferAsset: axfer.xferAsset,
});
return inbox;
}
/**
* Claim an ASA from the inbox
*
* @param asa The ASA to claim
*/
arc59_claim(asa: AssetID): void {
const inbox = this.inboxes(this.txn.sender).value;
sendAssetTransfer({
sender: inbox,
assetReceiver: this.txn.sender,
assetAmount: inbox.assetBalance(asa),
xferAsset: asa,
assetCloseTo: this.txn.sender,
});
sendPayment({
sender: inbox,
receiver: this.txn.sender,
amount: inbox.balance - inbox.minBalance,
});
}
/**
* Reject the ASA by closing it out to the ASA creator. Always sends two inner transactions.
* All non-MBR ALGO balance in the inbox will be sent to the caller.
*
* @param asa The ASA to reject
*/
arc59_reject(asa: AssetID) {
const inbox = this.inboxes(this.txn.sender).value;
sendAssetTransfer({
sender: inbox,
assetReceiver: asa.creator,
assetAmount: inbox.assetBalance(asa),
xferAsset: asa,
assetCloseTo: asa.creator,
});
sendPayment({
sender: inbox,
receiver: this.txn.sender,
amount: inbox.balance - inbox.minBalance,
});
}
/**
* Get the inbox address for the given receiver
*
* @param receiver The receiver to get the inbox for
*
* @returns Zero address if the receiver does not yet have an inbox, otherwise the inbox address
*/
arc59_getInbox(receiver: Address): Address {
return this.inboxes(receiver).exists
? this.inboxes(receiver).value
: globals.zeroAddress;
}
/** Claim any extra algo from the inbox */
arc59_claimAlgo() {
const inbox = this.inboxes(this.txn.sender).value;
assert(inbox.balance - inbox.minBalance !== 0);
sendPayment({
sender: inbox,
receiver: this.txn.sender,
amount: inbox.balance - inbox.minBalance,
});
}
}
```
### TypeScript Send Asset Function
[Section titled “TypeScript Send Asset Function”](#typescript-send-asset-function)
```ts
/**
* Send an asset to a receiver using the ARC59 router
*
* @param appClient The ARC59 client generated by algokit
* @param assetId The ID of the asset to send
* @param sender The address of the sender
* @param receiver The address of the receiver
* @param algorand The AlgorandClient instance to use to send transactions
*/
async function arc59SendAsset(
appClient: Arc59Client,
assetId: bigint,
sender: string,
receiver: string,
algorand: algokit.AlgorandClient
) {
// Get the address of the ARC59 router
const arc59RouterAddress = (await appClient.appClient.getAppReference())
.appAddress;
// Call arc59GetSendAssetInfo to get the following:
// itxns - The number of transactions needed to send the asset
// mbr - The minimum balance that must be sent to the router
// routerOptedIn - Whether the router has opted in to the asset
// receiverOptedIn - Whether the receiver has opted in to the asset
const [
itxns,
mbr,
routerOptedIn,
receiverOptedIn,
receiverAlgoNeededForClaim,
] = (await appClient.arc59GetSendAssetInfo({ asset: assetId, receiver }))
.return!;
// If the receiver has opted in, just send the asset directly
if (receiverOptedIn) {
await algorand.send.assetTransfer({
sender,
receiver,
assetId,
amount: 1n,
});
return;
}
// Create a composer to form an atomic transaction group
const composer = appClient.compose();
const signer = algorand.account.getSigner(sender);
// If the MBR is non-zero, send the MBR to the router
if (mbr || receiverAlgoNeededForClaim) {
const mbrPayment = await algorand.transactions.payment({
sender,
receiver: arc59RouterAddress,
amount: algokit.microAlgos(Number(mbr + receiverAlgoNeededForClaim)),
});
composer.addTransaction({ txn: mbrPayment, signer });
}
// If the router is not opted in, add a call to arc59OptRouterIn to do so
if (!routerOptedIn) composer.arc59OptRouterIn({ asa: assetId });
/** The box of the receiver's pubkey will always be needed */
const boxes = [algosdk.decodeAddress(receiver).publicKey];
/** The address of the receiver's inbox */
const inboxAddress = (
await appClient.compose().arc59GetInbox({ receiver }, { boxes }).simulate()
).returns[0];
// The transfer of the asset to the router
const axfer = await algorand.transactions.assetTransfer({
sender,
receiver: arc59RouterAddress,
assetId,
amount: 1n,
});
// An extra itxn is needed if we are also sending ALGO for the receiver claim
const totalItxns = itxns + (receiverAlgoNeededForClaim === 0n ? 0n : 1n);
composer.arc59SendAsset(
{ axfer, receiver, additionalReceiverFunds: receiverAlgoNeededForClaim },
{
sendParams: { fee: algokit.microAlgos(1000 + 1000 * Number(totalItxns)) },
boxes, // The receiver's pubkey
// Always good to include both accounts here, even if we think only the receiver is needed. This is to help protect against race conditions within a block.
accounts: [receiver, inboxAddress],
// Even though the asset is available in the group, we need to explicitly define it here because we will be checking the asset balance of the receiver
assets: [Number(assetId)],
}
);
// Disable resource population to ensure that our manually defined resources are correct
algokit.Config.configure({ populateAppCallResources: false });
// Send the transaction group
await composer.execute();
// Re-enable resource population
algokit.Config.configure({ populateAppCallResources: true });
}
```
### TypeScript Claim Function
[Section titled “TypeScript Claim Function”](#typescript-claim-function)
```ts
/**
* Claim an asset from the ARC59 inbox
*
* @param appClient The ARC59 client generated by algokit
* @param assetId The ID of the asset to claim
* @param claimer The address of the account claiming the asset
* @param algorand The AlgorandClient instance to use to send transactions
*/
async function arc59Claim(
appClient: Arc59Client,
assetId: bigint,
claimer: string,
algorand: algokit.AlgorandClient
) {
const composer = appClient.compose();
// Check if the claimer has opted in to the asset
let claimerOptedIn = false;
try {
await algorand.account.getAssetInformation(claimer, assetId);
claimerOptedIn = true;
} catch (e) {
// Do nothing
}
const inbox = (
await appClient
.compose()
.arc59GetInbox({ receiver: claimer })
.simulate({ allowUnnamedResources: true })
).returns[0];
let totalTxns = 3;
// If the inbox has extra ALGO, claim it
const inboxInfo = await algorand.account.getInformation(inbox);
if (inboxInfo.minBalance < inboxInfo.amount) {
totalTxns += 2;
composer.arc59ClaimAlgo(
{},
{
sender: algorand.account.getAccount(claimer),
sendParams: { fee: algokit.algos(0) },
}
);
}
// If the claimer hasn't already opted in, add a transaction to do so
if (!claimerOptedIn) {
composer.addTransaction({
txn: await algorand.transactions.assetOptIn({ assetId, sender: claimer }),
signer: algorand.account.getSigner(claimer),
});
}
composer.arc59Claim(
{ asa: assetId },
{
sender: algorand.account.getAccount(claimer),
sendParams: { fee: algokit.microAlgos(1000 * totalTxns) },
}
);
await composer.execute();
}
```
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
The router application controls all user inboxes. If this contract is compromised, user assets might also be compromised.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Wallet Arbitrary Signing API
> API function for signing data
## Abstract
[Section titled “Abstract”](#abstract)
This ARC proposes a standard for arbitrary data signing. It is designed to be a simple and flexible standard that can be used in a wide variety of applications.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> Comments like this are non-normative
## Rationale
[Section titled “Rationale”](#rationale)
Signing data is a common and critical operation. Users may need to sign data for multiple reasons (e.g. delegate signatures, DIDs, signing documents, authentication).
Algorand wallets need a standard approach to byte signing to unlock self-custodial services and protect users from malicious and attack-prone signing workflows.
This ARC provides a standard API for bytes signing. The API encodes byte arrays to be signed into well-structured JSON schemas together with additional metadata. It requires wallets to validate the signing inputs, notify users about what they are signing and warn them in case of dangerous signing requests.
### Overview
[Section titled “Overview”](#overview)
This ARC defines a function `signData(signingData, metadata)` for signing data.
`signingData` is a `StdSigData` object composed of the signing `data` that instantiates a known JSON Schema and the `signer`’s public key.
### Signing Flow
[Section titled “Signing Flow”](#signing-flow)
When connected to a specific `domain` (i.e. an app or other identifier), the wallet will receive a request to sign some `data` alongside some `authenticatorData`, which will look like random bytes. With this information, the wallet should follow these steps:
1. Hash the `data` field with `sha256`.
2. Hash the `domain` we are connected to with `sha256` and compare the result with the first 32 bytes of `authenticatorData`.
   2.1. If the hashes do not match, the wallet **MUST** return an error.
3. Append the `authenticatorData` to the resulting hash of the `data` field.
4. Sign the result
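The steps above can be sketched as follows. This is a minimal, non-normative illustration using Node's built-in Ed25519 support; the domain, data, and key pair are hypothetical stand-ins for the wallet's real inputs, and `authenticatorData` is reduced to just the domain hash (no flags or counters):

```typescript
import { createHash, generateKeyPairSync, sign } from "node:crypto";

// Hypothetical inputs: a connected domain, the data to sign,
// and an Ed25519 key pair standing in for the wallet's key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const domain = "example.com";
const data = Buffer.from(JSON.stringify({ hello: "world" }));

// authenticatorData begins with SHA256(domain) (the rpIdHash);
// flags, counters, etc. are omitted in this minimal sketch.
const authenticatorData = createHash("sha256").update(domain).digest();

// Step 1: hash the data field with sha256.
const dataHash = createHash("sha256").update(data).digest();

// Step 2: integrity check — the first 32 bytes of authenticatorData
// must equal SHA256(domain); otherwise the wallet MUST return an error.
const domainHash = createHash("sha256").update(domain).digest();
if (!authenticatorData.subarray(0, 32).equals(domainHash)) {
  throw new Error("ERROR_FAILED_DOMAIN_AUTH");
}

// Steps 3-4: append authenticatorData to the data hash and sign.
const toSign = Buffer.concat([dataHash, authenticatorData]);
const signature = sign(null, toSign, privateKey); // Ed25519
```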
### `Scopes`
[Section titled “Scopes”](#scopes)
Supported scopes are:
* `AUTH` (1): This scope is used for authentication purposes. It is used to sign data that will be used to authenticate the user to a specific domain. The `data` field **MUST** be a JSON object that represents the content to be signed. The `authenticatorData` field **MUST** include, at least, the `sha256` hash of the `domain` requesting a signature. The wallet **MUST** do an integrity check on the first 32 bytes of `authenticatorData` to match the hash. The `hdPath` field is **optional** and **MUST** be a BIP44 path in order to derive the private key to sign the `data`. The wallet **MUST** validate the path before signing.
Summarized signing process for `AUTH` scope:
```plaintext
EdDSA(SHA256(data) + SHA256(authenticatorData))
```
> **Note**: Other scopes could be added in the future.
#### Parameters
[Section titled “Parameters”](#parameters)
##### `StdSigData`
[Section titled “StdSigData”](#stdsigdata)
Must be a JSON object with the following properties:
| Field | Type | Description |
| ------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data` | `string` | string representing the content to be signed for the specific `Scope`. This can be an encoded JSON object or any other data. It **MUST** be presented to the user in a human-readable format. |
| `signer` | `bytes` | public key of the signer. This can be the public key related to an Algorand address or any other Ed25519 public key. |
| `domain` | `string` | This is the domain requesting the signature. It can be a URL, a DID, or any other identifier. It **MUST** be presented to the user to inform them about the context of the signature. |
| `requestId` | `string` | This field is **optional**. It is used to identify the request. It **MUST** be unique for each request. |
| `authenticatorData` | `bytes` | It **MUST** include, at least, the `sha256` hash of the `domain` requesting a signature. The wallet **MUST** do an integrity check on the first 32 bytes of `authenticatorData` to match the hash. It **MAY** also include signature counters, network flags or any other unique data to prevent replay attacks or to trick user to sign unrelated data to the scope. The wallet **SHOULD** validate every field in `authenticatorData` before signing. Each `Scope` **MUST** specify if `authenticatorData` should be appended to the hash of the `data` before signing. |
| `hdPath` | `string` | This field is **optional**. It is required if the wallet supports BIP39 / BIP32 / BIP44. This field **MUST** be a BIP44 path in order to derive the private key to sign the `data`. The wallet **MUST** validate the path before signing. |
##### `metadata`
[Section titled “metadata”](#metadata)
Must be a JSON object with the following properties:
| Field | Type | Description |
| ---------- | --------- | -------------------------------------------------------------------------------------------- |
| `scope` | `integer` | Defines the purpose of the signature. It **MUST** be one of the following values: `1` (AUTH) |
| `encoding` | `string` | Defines the encoding of the `data` field. `base64` is the recommended encoding. |
##### `authenticatorData`
[Section titled “authenticatorData”](#authenticatordata)
| Name | Length | Description | optional |
| ------------------------ | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- |
| `rpIdHash` | 32 bytes | SHA256 hash of the domain requesting the signature. | No |
| `flags` | 1 byte | Flags (bit 0 is the least significant bit). Bit 0 (0x01): User Present (UP); 0 means the user is not present. Bit 1: Reserved for future use (RFU1). Bit 2: User Verified (UV) result; 1 means the user is verified, 0 means the user is not verified. Bits 3-5: Reserved for future use (RFU2). Bit 6: Attested credential data included (AT); indicates whether the authenticator added attested credential data. Bit 7: Extension data included (ED); indicates whether the authenticator added extension data. | Yes |
| `signCount` | 4 bytes | Signature counter: a monotonically increasing counter that is incremented each time the user successfully authenticates, and reset to 0 when the authenticator is reset. It is used to prevent replay attacks. | Yes |
| `attestedCredentialData` | variable | attested credential data (if present). See [Specification](https://www.w3.org/TR/webauthn-2/#sctn-attested-credential-data) | Yes |
| `extensions` | variable | extension data (if present), is a key value JSON structure that may or may not be included. See [Specification](https://www.w3.org/TR/webauthn-2/#sctn-extensions) for full details | Yes |
This follows the FIDO WebAuthn specification for the `authenticatorData` field. The wallet **MUST** validate the `authenticatorData` field before signing. For more information on the `authenticatorData` field, please refer to the [WebAuthn specification](https://www.w3.org/TR/webauthn-2/#authenticator-data).
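As an illustration, the fixed-length prefix of `authenticatorData` (when the optional `flags` and `signCount` fields are present) could be parsed like this. This is a sketch, not part of the specification; the interface and function names are hypothetical, and the variable-length `attestedCredentialData`/`extensions` fields are not handled:

```typescript
// Parsed fixed-length prefix of an authenticatorData buffer,
// following the WebAuthn layout described in the table above.
interface AuthDataPrefix {
  rpIdHash: Uint8Array; // SHA256 of the requesting domain
  userPresent: boolean; // flags bit 0 (UP)
  userVerified: boolean; // flags bit 2 (UV)
  signCount: number; // big-endian 32-bit counter
}

function parseAuthenticatorData(buf: Uint8Array): AuthDataPrefix {
  // 32 bytes rpIdHash + 1 byte flags + 4 bytes signCount
  if (buf.length < 37) throw new Error("authenticatorData too short");
  const flags = buf[32];
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  return {
    rpIdHash: buf.slice(0, 32),
    userPresent: (flags & 0x01) !== 0,
    userVerified: (flags & 0x04) !== 0,
    signCount: view.getUint32(33, false), // big-endian per WebAuthn
  };
}
```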
##### `Errors`
[Section titled “Errors”](#errors)
These are the possible errors that the wallet **MUST** handle:
| Error | Description |
| ---------------------------------- | ---------------------------------------------------------------------- |
| `ERROR_INVALID_SCOPE` | The `scope` is not valid. |
| `ERROR_FAILED_DECODING` | The `data` field could not be decoded. |
| `ERROR_INVALID_SIGNER` | Unable to find in the wallet the public key related to the signer. |
| `ERROR_MISSING_DOMAIN` | The `domain` field is missing. |
| `ERROR_MISSING_AUTHENTICATED_DATA` | The `authenticatorData` field is missing. |
| `ERROR_BAD_JSON` | The `data` field is not a valid JSON object. |
| `ERROR_FAILED_DOMAIN_AUTH` | The `authenticatorData` field does not match the hash of the `domain`. |
| `ERROR_FAILED_HD_PATH` | The `hdPath` field is not a valid BIP44 path. |
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
N/A
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
Available in the `assets/arc-0060` folder.
### Sample Use cases
[Section titled “Sample Use cases”](#sample-use-cases)
#### Generic AUTH
[Section titled “Generic AUTH”](#generic-auth)
```ts
const authData: Uint8Array = new Uint8Array(createHash('sha256').update("arc60.io").digest())
const authRequest: StdSigData = {
data: Buffer.from("{[jsonfields....]}").toString('base64'),
signer: publicKey,
domain: "arc60.io",
requestId: Buffer.from(randomBytes(32)).toString('base64'),
authenticationData: authData,
hdPath: "m/44'/60'/0'/0/0"
}
const signResponse = await arc60wallet.signData(authRequest, { scope: ScopeType.AUTH, encoding: 'base64' })
```
#### CAIP-122
[Section titled “CAIP-122”](#caip-122)
```ts
const caip122Request: CAIP122 = {
domain: "arc60.io",
chain_id: "283",
account_address: ...
type: "ed25519",
statement: "We are requesting you to sign this message to authenticate to arc60.io",
uri: "https://arc60.io",
version: "1",
nonce: Buffer.from(randomBytes(32)).toString('base64'),
...
}
// Display message title according to EIP-4361
const msgTitle: string = `Sign this message to authenticate to ${caip122Request.domain} with account ${caip122Request.account_address}`
// Display message body according to EIP-4361
const msgBodyPlaceHolders: string = `URI: ${caip122Request.uri}\n` + `Chain ID: ${caip122Request.chain_id}\n`
+ `Type: ${caip122Request.type}\n`
+ `Nonce: ${caip122Request.nonce}\n`
+ `Statement: ${caip122Request.statement}\n`
+ `Expiration Time: ${caip122Request["expiration-time"]}\n`
+ `Not Before: ${caip122Request["not-before"]}\n`
+ `Issued At: ${caip122Request["issued-at"]}\n`
+ `Resources: ${(caip122Request.resources ?? []).join(' , \n')}\n`
// Display message according to EIP-4361
const msg: string = `${msgTitle}\n\n${msgBodyPlaceHolders}`
console.log(msg)
// authenticationData
const authenticationData: Uint8Array = new Uint8Array(createHash('sha256').update(caip122Request.domain).digest())
const signData: StdSigData = {
data: Buffer.from(JSON.stringify(caip122Request)).toString('base64'),
signer: publicKey,
domain: caip122Request.domain, // should be same as origin / authenticationData
// random unique id, to help RP / Client match requests
requestId: Buffer.from(randomBytes(32)).toString('base64'),
authenticationData: authenticationData
}
const signResponse = await arc60wallet.signData(signData, { scope: ScopeType.AUTH, encoding: 'base64' })
expect(signResponse).toBeDefined()
// reply
```
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Wallets are free to make their own UX choices, but they **SHOULD** show the user the purpose (i.e. `scope`) of the signature, the domain that is requesting the signature, and the data that is being signed. This is to prevent users from signing data that they do not understand.
Additionally, wallets **MUST** show the user the data being signed in a human-readable format, as well as the `authenticatorData` and how it was calculated, so that the hash can be verified by the user, for example when signing with a Ledger device.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ASA Circulating Supply
> Getter method for ASA circulating supply
## Abstract
[Section titled “Abstract”](#abstract)
This ARC introduces a standard for the definition of circulating supply for Algorand Standard Assets (ASA) and its client-side retrieval. A reference implementation is suggested.
## Motivation
[Section titled “Motivation”](#motivation)
Algorand Standard Asset (ASA) `total` supply is *defined* upon ASA creation.
Creating an ASA on the ledger *does not* imply its `total` supply is immediately “minted” or “circulating”. In fact, the semantics of token “minting” on Algorand differ slightly from other blockchains: minting does not coincide with the creation of token units on the ledger.
The Reserve Address, one of the 4 addresses of ASA Role-Based-Access-Control (RBAC), is conventionally used to identify the portion of `total` supply not yet in circulation. The Reserve Address has no “privilege” over the token: it is just a “logical” label used (client-side) to classify an existing amount of ASA as “not in circulation”.
According to this convention, “minting” an amount of ASA units is equivalent to *moving that amount out of the Reserve Address*.
> ASA may have the Reserve Address assigned to a Smart Contract to enforce specific “minting” policies, if needed.
This convention led to a simple and unsophisticated semantic of ASA circulating supply, widely adopted by clients (wallets, explorers, etc.) to provide standard information:
```text
circulating_supply = total - reserve_balance
```
Where `reserve_balance` is the ASA balance held by the Reserve Address.
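In code, the conventional computation is trivial — a sketch with a hypothetical helper name, where `total` and `reserveBalance` are assumed to have been fetched from algod's asset and account endpoints:

```typescript
// Conventional ASA circulating supply, per the basic convention above:
// everything not held by the Reserve Address counts as circulating.
function conventionalCirculatingSupply(
  total: bigint,
  reserveBalance: bigint
): bigint {
  return total - reserveBalance;
}
```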
However, the simplicity of this convention, which fostered adoption across the Algorand ecosystem, poses some limitations. Complex and sophisticated use-cases of ASA, such as regulated stable-coins and tokenized securities among others, require more detailed and expressive definitions of circulating supply.
As an example, an ASA could have “burned”, “locked” or “pre-minted” amounts of tokens, not held in the Reserve Address, which *should not* be counted as “circulating” supply. This is not possible with the basic ASA protocol convention.
This ARC proposes a standard ABI *read-only* method (getter) to provide the circulating supply of an ASA.
## Specification
[Section titled “Specification”](#specification)
The keywords “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
> Notes like this are non-normative.
### ABI Method
[Section titled “ABI Method”](#abi-method)
A compliant ASA, whose circulating supply definition conforms to this ARC, **MUST** implement the following method on an Application (referred to as the *Circulating Supply App* in this specification):
```json
{
"name": "arc62_get_circulating_supply",
"readonly": true,
"args": [
{
"type": "uint64",
"name": "asset_id",
"desc": "ASA ID of the circulating supply"
}
],
"returns": {
"type": "uint64",
"desc": "ASA circulating supply"
},
"desc": "Get ASA circulating supply"
}
```
The `arc62_get_circulating_supply` **MUST** be a *read-only* ([ARC-22](/arc-standards/arc-0022)) method (getter).
### Usage
[Section titled “Usage”](#usage)
Getter calls **SHOULD** be *simulated*.
Any external resources used by the implementation **SHOULD** be discovered and auto-populated by the simulated getter call.
#### Example 1
[Section titled “Example 1”](#example-1)
> Let the ASA have `total` supply and a Reserve Address (i.e. not set to `ZeroAddress`).
>
> Let the Reserve Address be assigned to an account different from the Circulating Supply App Account.
>
> Let `burned` be an external Burned Address dedicated to ASA burned supply.
>
> Let `locked` be an external Locked Address dedicated to ASA locked supply.
>
> The ASA issuer defines the *circulating supply* as:
>
> ```text
> circulating_supply = total - reserve_balance - burned_balance - locked_balance
> ```
>
> In this case the simulated read-only method call would auto-populate 1 external reference for the ASA and 3 external reference accounts (Reserve, Burned and Locked).
#### Example 2
[Section titled “Example 2”](#example-2)
> Let the ASA have `total` supply and *no* Reserve Address (i.e. set to `ZeroAddress`).
>
> Let `non_circulating_amount` be a UInt64 Global Var defined by the implementation of the Circulating Supply App.
>
> The ASA issuer defines the *circulating supply* as:
>
> ```text
> circulating_supply = total - non_circulating_amount
> ```
>
> In this case the simulated read-only method call would auto-populate just 1 external reference for the ASA.
### Circulating Supply Application discovery
[Section titled “Circulating Supply Application discovery”](#circulating-supply-application-discovery)
> Given an ASA ID, clients (wallet, explorer, etc.) need to discover the related Circulating Supply App.
An ASA conforming to this ARC **MUST** specify the Circulating Supply App ID.
> To avoid ecosystem fragmentation, this ARC does not propose any new method to specify the metadata of an ASA. Instead, it only extends already existing standards.
If the ASA also conforms to any ARC that supports additional `properties` ([ARC-3](/arc-standards/arc-0003), [ARC-19](/arc-standards/arc-0019), etc.) as metadata declared in the ASA URL field, then it **MUST** include an `arc-62` key and set the corresponding value to a map, including the ID of the Circulating Supply App as a value for the key `application-id`.
#### Example: ARC-3 Property
[Section titled “Example: ARC-3 Property”](#example-arc-3-property)
```json
{
//...
"properties": {
//...
"arc-62": {
"application-id": 123
}
}
//...
}
```
## Rationale
[Section titled “Rationale”](#rationale)
The definition of *circulating supply* for sophisticated use-cases is usually ASA-specific. It could involve, for example, complex math or external accounts’ balances, variables stored in boxes or in global state, etc.
For this reason, the proposed method’s signature does not require any reference to external resources, apart from the `asset_id` of the ASA for which the circulating supply is defined.
Eventual external resources can be discovered and auto-populated directly by the simulated method call.
The rationale behind this design choice is avoiding fragmentation and integration overhead for clients (wallets, explorers, etc.).
Clients just need to know:
1. The ASA ID;
2. The Circulating Supply App ID implementing the `arc62_get_circulating_supply` method for that ASA.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
Existing ASAs willing to conform to this ARC **MUST** specify the Circulating Supply App ID in an [ARC-2](/arc-standards/arc-0002) `AssetConfig` transaction note field, as follows:
* The `<dapp-name>` **MUST** be equal to `arc62`;
* The **RECOMMENDED** `<format>` values are [MsgPack](https://msgpack.org/) (`m`) or [JSON](https://www.json.org/json-en.html) (`j`);
* The `<data>` **MUST** specify an `application-id` equal to the Circulating Supply App ID.
> **WARNING**: To preserve the existing ASA RBAC (e.g. Manager Address, Freeze Address, etc.) it is necessary to **include all the existing role addresses** in the `AssetConfig`. Not doing so would irreversibly disable the RBAC roles!
### Example - JSON without version
[Section titled “Example - JSON without version”](#example---json-without-version)
```text
arc62:j{"application-id":123}
```
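The note format above can be produced and parsed with a small client-side helper. This is a sketch with hypothetical function names, handling only the JSON (`j`) format without version shown in the example:

```typescript
// Build the ARC-2 note advertising the Circulating Supply App ID
// (JSON format, no version), e.g. arc62:j{"application-id":123}.
function encodeArc62Note(appId: number): string {
  return `arc62:j${JSON.stringify({ "application-id": appId })}`;
}

// Extract the Circulating Supply App ID from such a note,
// or return null if the note does not match the format.
function decodeArc62Note(note: string): number | null {
  const match = note.match(/^arc62:j(\{.*\})$/);
  if (!match) return null;
  const parsed = JSON.parse(match[1]);
  return typeof parsed["application-id"] === "number"
    ? parsed["application-id"]
    : null;
}
```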
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
> This section is non-normative.
This section suggests a reference implementation of the Circulating Supply App.
An Algorand-Python example is available [here](https://github.com/algorandfoundation/ARCs/tree/main/assets/arc-0062).
### Recommendations
[Section titled “Recommendations”](#recommendations)
An ASA using the reference implementation **SHOULD NOT** assign the Reserve Address to the Circulating Supply App Account.
A reference implementation **SHOULD** target a version of the AVM that supports foreign resources pooling (version 9 or greater).
A reference implementation **SHOULD** use 3 external addresses, in addition to the Reserve Address, to define the non-circulating supply.
> ⚠️The specification *is not limited* to 3 external addresses. The implementations **MAY** extend the non-circulating labels using more addresses, global storage, box storage, etc.
The **RECOMMENDED** labels for non-circulating balances are: `burned`, `locked` and `generic`.
> To change the labels of the non-circulating addresses, it is sufficient to rename the following constants in `smart_contracts/circulating_supply/config.py`:
>
> ```python
> from typing import Final
>
> NOT_CIRCULATING_LABEL_1: Final[str] = "burned"
> NOT_CIRCULATING_LABEL_2: Final[str] = "locked"
> NOT_CIRCULATING_LABEL_3: Final[str] = "generic"
> ```
### State Schema
[Section titled “State Schema”](#state-schema)
A reference implementation **SHOULD** allocate, at least, the following Global State variables:
* `asset_id` as UInt64, initialized to `0` and set **only once** by the ASA Manager Address;
* Non-circulating address 1 (`burned`) as Bytes, initialized to the Global `Zero Address` and set by the ASA Manager Address;
* Non-circulating address 2 (`locked`) as Bytes, initialized to the Global `Zero Address` and set by the ASA Manager Address;
* Non-circulating address 3 (`generic`) as Bytes, initialized to the Global `Zero Address` and set by the ASA Manager Address.
A reference implementation **SHOULD** enforce that, upon setting the `burned`, `locked`, and `generic` addresses, those addresses have already opted into the `asset_id` ASA.
```json
"state": {
"global": {
"num_byte_slices": 3,
"num_uints": 1
},
"local": {
"num_byte_slices": 0,
"num_uints": 0
}
},
"schema": {
"global": {
"declared": {
"asset_id": {
"type": "uint64",
"key": "asset_id"
},
"not_circulating_label_1": {
"type": "bytes",
"key": "burned"
},
"not_circulating_label_2": {
"type": "bytes",
"key": "locked"
},
"not_circulating_label_3": {
"type": "bytes",
"key": "generic"
}
},
"reserved": {}
},
"local": {
"declared": {},
"reserved": {}
}
},
```
### Circulating Supply Getter
[Section titled “Circulating Supply Getter”](#circulating-supply-getter)
A reference implementation **SHOULD** enforce that the `asset_id` Global Variable is equal to the `asset_id` argument of the `arc62_get_circulating_supply` getter method.
> Alternatively, the reference implementation could ignore the `asset_id` argument and directly use the `asset_id` Global Variable.
A reference implementation **SHOULD** return the ASA *circulating supply* as:
```text
circulating_supply = total - reserve_balance - burned_balance - locked_balance - generic_balance
```
Where:
* `total` is the total supply of the ASA (`asset_id`);
* `reserve_balance` is the ASA balance held by the Reserve Address, or `0` if the address is set to the Global `ZeroAddress` or is not opted into `asset_id`;
* `burned_balance` is the ASA balance held by the Burned Address, or `0` if the address is set to the Global `ZeroAddress` or is not opted into `asset_id`;
* `locked_balance` is the ASA balance held by the Locked Address, or `0` if the address is set to the Global `ZeroAddress` or is not opted into `asset_id`;
* `generic_balance` is the ASA balance held by a Generic Address, or `0` if the address is set to the Global `ZeroAddress` or is not opted into `asset_id`.
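The formula above can be sketched off-chain as a pure function, where `None` models an address that is the `ZeroAddress` or not opted into the asset (the function name is hypothetical):

```python
from typing import Optional

def circulating_supply(
    total: int,
    reserve_balance: Optional[int] = None,
    burned_balance: Optional[int] = None,
    locked_balance: Optional[int] = None,
    generic_balance: Optional[int] = None,
) -> int:
    """Compute the ARC-62 circulating supply from the total and the
    non-circulating balances; None counts as 0 per the spec."""
    def bal(b: Optional[int]) -> int:
        return b if b is not None else 0
    return total - bal(reserve_balance) - bal(burned_balance) - bal(locked_balance) - bal(generic_balance)

print(circulating_supply(1000, reserve_balance=100, burned_balance=50))  # 850
```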
> ⚠️The implementations **MAY** extend the calculation of `circulating_supply` using global storage, box storage, etc. See [Example 2](/arc-standards/arc-0062#example-2) for reference.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Permissions over the Circulating Supply App setting and update **SHOULD** be granted to the ASA Manager Address.
> The ASA trust-model (i.e. who sets the Reserve Address) is extended to the generalized ASA circulating supply definition.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# AVM Run Time Errors In Program
> Informative AVM run time errors based on program bytecode
## Abstract
[Section titled “Abstract”](#abstract)
This document introduces a convention for raising informative run time errors on the Algorand Virtual Machine (AVM) directly from the program bytecode.
## Motivation
[Section titled “Motivation”](#motivation)
The AVM does not offer native opcodes to catch and raise run time errors.
The lack of native error handling semantics can lead to fragmentation of tooling and friction for AVM clients, who are unable to retrieve informative and useful hints about run time failures.
This ARC formalizes a convention for raising AVM run time errors based solely on the program bytecode.
## Specification
[Section titled “Specification”](#specification)
The keywords “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
> Notes like this are non-normative.
### Error format
[Section titled “Error format”](#error-format)
> AVM program bytecode has a limited size. In this convention, errors are part of the bytecode, so it is good to mind error formatting and size.
> Errors consist of a *code* and an optional *short message*.
Errors **MUST** be prefixed either with:
* `ERR:` for custom errors;
* `AER:` reserved for future ARC standard errors.
Errors **MUST** use `:` as domain separator.
It is **RECOMMENDED** to use `UTF-8` for the error bytes string encoding.
It is **RECOMMENDED** to use *short* error messages.
It is **RECOMMENDED** to use [camel case](https://en.wikipedia.org/wiki/Camel_case/) for alphanumeric error codes.
It is **RECOMMENDED** to avoid error byte strings of *exactly* 8 or 32 bytes.
### In Program Errors
[Section titled “In Program Errors”](#in-program-errors)
When a program wants to emit informative run time errors, directly from the bytecode, it **MUST**:
1. Push to the stack the bytes string containing the error;
2. Execute the `log` opcode to use the bytes from the top of the stack;
3. Execute the `err` opcode to immediately terminate the program.
Upon a program run time failure, the Algod API response contains both the failed *program counter* (`pc`) and the `logs` array with the *errors*.
The program **MAY** return multiple errors in the same failed execution.
The errors **MUST** be retrieved by:
1. Decoding the `base64` elements of the `logs` array;
2. Validating the decoded elements against the error regexp.
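The two retrieval steps above can be sketched as follows. The regexp shown here is an assumption illustrating the `ERR:`/`AER:` prefix rule, not the normative pattern from this ARC:

```python
import base64
import re

# Assumed error pattern: 'ERR:' or 'AER:' prefix, an alphanumeric code,
# and an optional ':'-separated short message.
ERROR_RE = re.compile(r"^(ERR|AER):[A-Za-z0-9]+(:.+)?$")

def extract_errors(logs: list[str]) -> list[str]:
    """Decode base64 `logs` elements and keep only those matching the error format."""
    errors = []
    for entry in logs:
        try:
            decoded = base64.b64decode(entry).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            continue  # not a UTF-8 string; cannot be an error per this convention
        if ERROR_RE.match(decoded):
            errors.append(decoded)
    return errors

print(extract_errors(["RVJSOjAwMTpJbnZhbGlkIE1ldGhvZA=="]))  # ['ERR:001:Invalid Method']
```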
### Error examples
[Section titled “Error examples”](#error-examples)
> Errors conforming to this specification are always prefixed with `ERR:`.
Error with a *numeric code*: `ERR:042`.
Error with an *alphanumeric code*: `ERR:BadRequest`.
Error with a *numeric code* and *short message*: `ERR:042:AFunnyError`.
### Program example
[Section titled “Program example”](#program-example)
The following program example raises the error `ERR:001:Invalid Method` for any application call to methods different from `m1()void`.
```teal
#pragma version 10
txn ApplicationID
bz end
method "m1()void"
txn ApplicationArgs 0
match method1
byte "ERR:001:Invalid Method"
log
err
method1:
b end
end:
int 1
```
Full Algod API response of a failed execution:
```json
{
"data": {
"app-index":1004,
"eval-states": [
{
"logs": ["RVJSOjAwMTpJbnZhbGlkIE1ldGhvZA=="]
}
],
"group-index":0,
"pc":41
},
"message":"TransactionPool.Remember: transaction ESI4GHAZY46MCUCLPBSB5HBRZPGO6V7DDUM5XKMNVPIRJK6DDAGQ: logic eval error: err opcode executed. Details: app=1004, pc=41"
}
```
The `logs` array contains the `base64` encoded error `ERR:001:Invalid Method`.
The `logs` array **MAY** contain elements that are not errors (as specified by the regexp).
It is **NOT RECOMMENDED** to use the `message` field to retrieve errors.
### AVM Compilers
[Section titled “AVM Compilers”](#avm-compilers)
AVM compilers (and related tools) **SHOULD** provide two error compiling options:
1. The one specified in this ARC as **default**;
2. The one specified in [ARC-56](/arc-standards/arc-0056) as fallback, if compiled bytecode size exceeds the AVM limits.
> Compilers **MAY** optimize for program bytecode size by storing the error prefixes in the `bytecblock` and concatenating the error message at the cost of some extra opcodes.
## Rationale
[Section titled “Rationale”](#rationale)
This convention for AVM run time errors presents the following PROS and CONS.
**PROS:**
* No additional artifacts required to return informative run time errors;
* Errors are directly returned in the Algod API response, which can be filtered with the specified error regexp.
**CONS:**
* Errors consume program bytecode size.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
> Not applicable.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ASA Parameters Conventions, Digital Media
> Alternative conventions for ASAs containing digital media.
We introduce community conventions for the parameters of Algorand Standard Assets (ASAs) containing digital media.
## Abstract
[Section titled “Abstract”](#abstract)
The goal of these conventions is to make it simpler to display the properties of a given ASA. This ARC differs from [ARC-3](/arc-standards/arc-0003) by focusing on optimization for fetching of digital media, as well as the use of onchain metadata. Furthermore, since asset configuration transactions are used to store the metadata, this ARC can be applied to existing ASAs.
While mutability helps with backwards compatibility and other use cases, like leveling up an RPG character, some use cases call for immutability. In these cases, the ASA manager MAY remove the manager address, after which point the Algorand network won’t allow anyone to send asset configuration transactions for the ASA. This effectively makes the latest valid [ARC-69](/arc-standards/arc-0069) metadata immutable.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
An ARC-69 ASA has an associated JSON Metadata file, formatted as specified below, that is stored on-chain in the note field of the most recent asset configuration transaction (that contains a note field with a valid ARC-69 JSON metadata).
### ASA Parameters Conventions
[Section titled “ASA Parameters Conventions”](#asa-parameters-conventions)
The ASA parameters should follow these conventions:
* *Unit Name* (`un`): no restriction.
* *Asset Name* (`an`): no restriction.
* *Asset URL* (`au`): a URI pointing to digital media file. This URI:
* **SHOULD** be persistent.
* **SHOULD** link to a file small enough to fetch quickly in a gallery view.
* **MUST** follow [RFC-3986](https://www.ietf.org/rfc/rfc3986.txt) and **MUST NOT** contain any whitespace character.
* **SHOULD** specify media type with `#` fragment identifier at end of URL. This format **MUST** follow: `#i` for images, `#v` for videos, `#a` for audio, `#p` for PDF, or `#h` for HTML/interactive digital media. If unspecified, assume Image.
* **SHOULD** use one of the following URI schemes (for compatibility and security): *https* and *ipfs*:
* When the file is stored on IPFS, the `ipfs://...` URI **SHOULD** be used. IPFS Gateway URI (such as `https://ipfs.io/ipfs/...`) **SHOULD NOT** be used.
* **SHOULD NOT** use the following URI scheme: *http* (due to security concerns).
* *Asset Metadata Hash* (`am`): the SHA-256 digest of the full resolution media file as a 32-byte string (as defined in [NIST FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4) )
* **OPTIONAL**
* *Freeze Address* (`f`):
* **SHOULD** be empty, unless needed for royalties or other use cases
* *Clawback Address* (`c`):
* **SHOULD** be empty, unless needed for royalties or other use cases
There are no requirements regarding the manager account of the ASA, or the reserve account. However, if immutability is required the manager address **MUST** be removed.
Furthermore, the manager address, if present, **SHOULD** be under the control of the ASA creator, as the manager address can unilaterally change the metadata. Some advanced use cases **MAY** use a logicsig as ASA manager, if the logicsig only allows to set the note fields by the ASA creator.
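The `#` fragment-identifier convention for the Asset URL described above can be resolved client-side with a small lookup. This is a sketch (the helper name `media_kind` is hypothetical):

```python
# Fragment identifiers defined by the convention above.
FRAGMENT_KINDS = {"i": "image", "v": "video", "a": "audio", "p": "pdf", "h": "html"}

def media_kind(asset_url: str) -> str:
    """Return the media kind signalled by the Asset URL's '#' fragment.

    Per the convention, assume an image when the fragment is absent or unknown.
    """
    _, sep, fragment = asset_url.partition("#")
    if sep and fragment in FRAGMENT_KINDS:
        return FRAGMENT_KINDS[fragment]
    return "image"

print(media_kind("ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT#v"))  # video
```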
### JSON Metadata File Schema
[Section titled “JSON Metadata File Schema”](#json-metadata-file-schema)
```json
{
"title": "Token Metadata",
"type": "object",
"properties": {
"standard": {
"type": "string",
"value": "arc69",
"description": "(Required) Describes the standard used."
},
"description": {
"type": "string",
"description": "Describes the asset that this token represents."
},
"external_url": {
"type": "string",
"description": "A URI pointing to an external website. Borrowed from Open Sea's metadata format (https://docs.opensea.io/docs/metadata-standards)."
},
"media_url": {
"type": "string",
"description": "A URI pointing to a high resolution version of the asset's media."
},
"properties": {
"type": "object",
"description": "Properties following the EIP-1155 'simple properties' format. (https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1155.md#erc-1155-metadata-uri-json-schema)"
},
"mime_type": {
"type": "string",
"description": "Describes the MIME type of the ASA's URL (`au` field)."
},
"attributes": {
"type": "array",
"description": "(Deprecated. New NFTs should define attributes with the simple `properties` object. Marketplaces should support both the `properties` object and the `attributes` array). The `attributes` array follows Open Sea's format: https://docs.opensea.io/docs/metadata-standards#attributes"
}
},
"required":[
"standard"
]
}
```
The `standard` field is **REQUIRED** and **MUST** equal `arc69`. All other fields are **OPTIONAL**. If provided, the other fields **MUST** match the description in the JSON schema.
The URI field (`external_url`) is defined similarly to the Asset URL parameter `au`. However, contrary to the Asset URL, the `external_url` does not need to link to the digital media file.
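A minimal client-side check of these schema rules might look like the sketch below (the helper name `is_valid_arc69` is hypothetical; a full implementation would validate against the complete JSON schema):

```python
def is_valid_arc69(meta: dict) -> bool:
    """Check the required `standard` field and the types of the optional fields."""
    if meta.get("standard") != "arc69":
        return False  # `standard` is REQUIRED and MUST equal "arc69"
    string_fields = ("description", "external_url", "media_url", "mime_type")
    if any(f in meta and not isinstance(meta[f], str) for f in string_fields):
        return False
    if "properties" in meta and not isinstance(meta["properties"], dict):
        return False
    if "attributes" in meta and not isinstance(meta["attributes"], list):
        return False
    return True

print(is_valid_arc69({"standard": "arc69", "properties": {"Bass": "Groovy"}}))  # True
```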
#### MIME Type
[Section titled “MIME Type”](#mime-type)
In addition to specifying a data type in the ASA’s URL (`au` field) with a URI fragment (ex: `#v` for video), the JSON Metadata schema also allows indication of the URL’s MIME type (ex: `video/mp4`) via the `mime_type` field.
#### Examples
[Section titled “Examples”](#examples)
##### Basic Example
[Section titled “Basic Example”](#basic-example)
An example of an ARC-69 JSON Metadata file for a song follows. The properties array proposes some **SUGGESTED** formatting for token-specific display properties and metadata.
```json
{
"standard": "arc69",
"description": "arc69 theme song",
"external_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"mime_type": "video/mp4",
"properties": {
"Bass":"Groovy",
"Vibes":"Funky",
"Overall":"Good stuff"
}
}
```
An example of possible ASA parameters would be:
* *Asset Name*: `ARC-69 theme song` for example.
* *Unit Name*: `69TS` for example.
* *Asset URL*: `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT#v`
* *Metadata Hash*: the 32 bytes of the SHA-256 digest of the high resolution media file.
* *Total Number of Units*: 1
* *Number of Digits after the Decimal Point*: 0
#### Mutability
[Section titled “Mutability”](#mutability)
##### Rendering
[Section titled “Rendering”](#rendering)
Clients **SHOULD** render an ASA’s latest ARC-69 metadata. Clients **MAY** render an ASA’s previous ARC-69 metadata for changelogs or other historical features.
##### Updating ARC-69 metadata
[Section titled “Updating ARC-69 metadata”](#updating-arc-69-metadata)
If an ASA has a manager address, then the manager **MAY** update an ASA’s ARC-69 metadata. To do so, the manager sends a new `acfg` transaction with the entire metadata represented as JSON in the transaction’s `note` field.
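Since the metadata lives in the transaction note, it must fit the note field's 1024-byte limit. A sketch of encoding metadata for the `acfg` note (the helper name `encode_arc69_note` is hypothetical):

```python
import json

MAX_NOTE_BYTES = 1024  # Algorand transaction note field size limit

def encode_arc69_note(metadata: dict) -> bytes:
    """Serialize ARC-69 metadata compactly for an `acfg` transaction note."""
    note = json.dumps(metadata, separators=(",", ":")).encode("utf-8")
    if len(note) > MAX_NOTE_BYTES:
        raise ValueError(
            f"metadata is {len(note)} bytes; the note field holds at most {MAX_NOTE_BYTES}"
        )
    return note

print(encode_arc69_note({"standard": "arc69"}))  # b'{"standard":"arc69"}'
```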
##### Making ARC-69 metadata immutable
[Section titled “Making ARC-69 metadata immutable”](#making-arc-69-metadata-immutable)
Managers MAY make an ASA’s ARC-69 immutable. To do so, they MUST remove the ASA’s manager address with an `acfg` transaction.
##### ARC-69 attribute deprecation
[Section titled “ARC-69 attribute deprecation”](#arc-69-attribute-deprecation)
The initial version of ARC-69 followed Open Sea's attributes format, as illustrated below:
```plaintext
"attributes": {
"type": "array",
"description": "Attributes following Open Sea's attributes format (https://docs.opensea.io/docs/metadata-standards#attributes)."
}
```
This format is now deprecated. New NFTs **SHOULD** use the simple `properties` format, since it significantly reduces the metadata size.
To be fully compliant with the ARC-69 standard, both the `properties` object and the `attributes` array **SHOULD** be supported.
## Rationale
[Section titled “Rationale”](#rationale)
These conventions take inspiration from [Open Sea’s metadata standards](https://docs.opensea.io/docs/metadata-standards) and [EIP-1155](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1155.md#erc-1155-metadata-uri-json-schema)
to facilitate interoperability.
The main differences are highlighted below:
* Asset Name, Unit Name, and URL are specified in the ASA parameters. This allows applications to efficiently display meaningful information, even if they aren’t aware of ARC-69 metadata.
* MIME types help clients more effectively fetch and render media.
* All asset metadata is stored onchain.
* Metadata can be either mutable or immutable.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Non-Transferable ASA
> Parameters Conventions Non-Transferable Algorand Standard Asset
## Abstract
[Section titled “Abstract”](#abstract)
The goal is to make it simpler for block explorers, wallets, exchanges, marketplaces, and more generally, client software to identify & interact with a Non-transferable ASA (NTA).
This defines an interface extending [ARC-3](/arc-standards/arc-0003) & [ARC-69](/arc-standards/arc-0069) non-fungible ASAs to create Non-transferable ASAs. Before issuance, both parties (issuer and receiver) have to agree on who (if anyone) has the authorization to burn this ASA.
> This spec is compatible with [ARC-19](/arc-standards/arc-0019) to create an updatable Non-transferable ASA.
## Motivation
[Section titled “Motivation”](#motivation)
The idea of Non-transferable ASAs has garnered significant attention, inspired by the concept of Soul Bound Tokens. However, without a clear definition, Non-transferable ASAs cannot achieve interoperability. Developing universal services targeting Non-transferable ASAs remains challenging without a minimal consensus on their implementation and lifecycle management.
This ARC envisions Non-transferable ASAs as specialized assets, akin to Soul Bound ASAs, that will serve as identities, credentials, credit records, loan histories, memberships, and much more. To provide the necessary flexibility in these use cases, Non-transferable ASAs must feature an application-specific burn method and a distinct way to differentiate themselves from regular ASAs.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
* There are 2 NTA actor roles: **Issuer** and **Holder**.
* There are 3 NTA ASA states, **Issued** , **Held** and **Revoked**.
* **Held** (claimed) and **Revoked** NTAs reside in the holder’s wallet after being claimed, permanently.
* The ASA parameter decimal places **MUST** be 0 (fractional NFTs are not allowed).
* The ASA parameter total supply **MUST** be 1 (a true non-fungible token).
Note: On Algorand, to prioritize end users and support decentralization, the final say over holding any ASA rests with the user. Unless the user is the creator (for whom removal requires token deletion), the user can close the token out back to the creator even if the token is frozen. After much discussion, feedback, and many proposed solutions from experts in the field, and in keeping with Algorand’s design, this ARC embraces this convention and preserves the holder’s right to detach a Non-transferable ASA and close it back to the creator. In summary, [ARC-71](/arc-standards/arc-0071) NTAs respect the account holder’s right to close out the ASA back to the creator address.
### ASA Parameters Conventions
[Section titled “ASA Parameters Conventions”](#asa-parameters-conventions)
The Issued state is the starting state of the ASA. The Held (claimed) state is when the NTA has been sent to the destination wallet (claimed). The Revoked state is when the NTA ASA has been revoked by the issuer after issuance and is therefore no longer valid for any use case except provenance and historical data reference.
* NTAs in the Revoked state are no longer valid and cannot be used as proof of any credentials.
* The Manager address can revoke the NTA ASA by setting the Manager address to the `ZeroAddress`.
* Issuer **MUST** be an Algorand Smart Contract Account.
#### Issued Non-transferable ASA
[Section titled “Issued Non-transferable ASA”](#issued-non-transferable-asa)
* The Creator parameter: the ASA **MAY** be created by any address.
* The Clawback parameter **MUST** be the `ZeroAddress`.
* The Freeze parameter **MUST** be set to the Issuer Address.
* The Manager parameter **MAY** be set to any address but is **RECOMMENDED** to be the Issuer.
* The Reserve parameter **MUST** be set to either [ARC-19](/arc-standards/arc-0019) metadata or NTA Issuer’s address.
#### Held (claimed) Non-transferable ASA
[Section titled “Held (claimed) Non-transferable ASA”](#held-claimed-non-transferable-asa)
* The Creator parameter: the ASA **MAY** be created by any address.
* The Clawback parameter **MUST** be the `ZeroAddress`.
* The Freeze parameter **MUST** be set to the `ZeroAddress`.
* The asset must be frozen for holder (claimer) account address.
* The Manager parameter **MAY** be set to any address but is **RECOMMENDED** to be the Issuer.
* The Reserve parameter **MUST** be set to either ARC-19 metadata or NTA Issuer’s address.
#### Revoked Non-transferable ASA
[Section titled “Revoked Non-transferable ASA”](#revoked-non-transferable-asa)
* The Manager parameter **MUST** be set to `ZeroAddress`.
## Rationale
[Section titled “Rationale”](#rationale)
### Non-transferable ASA NFT
[Section titled “Non-transferable ASA NFT”](#non-transferable-asa-nft)
Non-transferable ASA serves as a specialized subset of the existing ASAs. The advantage of such design is seamless compatibility of Non-transferable ASA with existing NFT services. Service providers can treat Non-transferable ASA NFTs like other ASAs and do not need to make drastic changes to their existing codebase.
### Revoking vs Burning
[Section titled “Revoking vs Burning”](#revoking-vs-burning)
Rationale for Revocation Over Burning in Non-Transferable ASAs (NTAs):
The concept of Non-Transferable ASAs (NTAs) is rooted in permanence and attachment to the holder. Introducing a “burn” mechanism for NTAs fundamentally contradicts this concept because it involves removing the token from the holder’s wallet entirely. Burning suggests destruction and detachment, which is inherently incompatible with the idea of something being bound to the holder for life.
In contrast, a revocation mechanism aligns more closely with both the Non-Transferable philosophy and established W3C standards, particularly in the context of Verifiable Credentials (VCs). Revocation allows for NTAs to remain in the user’s wallet, maintaining provenance, historical data, and records of the token’s existence, while simultaneously marking the token as inactive or revoked by its issuer. This is achieved by setting the Manager address of the token to the ZeroAddress, effectively signaling that the token is no longer valid without removing it from the wallet.
For example, in cases where a Verifiable Credential (VC) issued as an NTA expires or needs to be invalidated (e.g., a driver’s license), revocation becomes an essential operation. The token can be revoked by the issuer without being deleted from the user’s wallet, preserving a clear record of its prior existence and revocation status. This is beneficial for provenance tracking and compliance, as historical records are crucial in many scenarios. Furthermore, the token can be used as a reference for re-issued or updated credentials without breaking its attachment to the holder.
This approach has clear benefits:
* **Provenance and Historical Data**: Keeping the NTA in the wallet allows dApps and systems to track the history of revoked tokens, enabling insights into previous credentials or claims.
* **Re-usability and Compatibility**: NTAs with revocation fit well into W3C and DIF standards around re-usable DIDs (Decentralized Identifiers) and VCs, allowing credentials to evolve (e.g., switching from one issuer to another) without breaking the underlying identity or trust models.
* **Immutable Attachment**: The token does not leave the wallet, making it clear that the NTA is still part of the user’s identity, but with a revoked status.
In contrast, burning would not allow these records to be maintained, and would break the “bound” nature of the NTA by removing the token from the holder’s possession entirely, which defeats the core idea behind NTAs.
In summary, revocation offers a more interoperable alternative to burning for NTAs. It ensures that NTAs remain Non-Transferable while allowing for expiration, invalidation, or issuer changes, all while maintaining a record of the token’s lifecycle and status.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
[ARC-3](/arc-standards/arc-0003), [ARC-69](/arc-standards/arc-0069), [ARC-19](/arc-standards/arc-0019) ASAs can be converted into a NTA ASA, only if the manager address & freeze address are still available.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
* Claiming/receiving an NTA ASA locks Algo (the minimum balance requirement) until the user decides to close it out back to the creator address.
* For security-critical implementations it is vital to take into account that, by Algorand’s design, the user has the right to close out the ASA back to the creator address. Such a close-out is permanently recorded in the on-chain transaction history and by indexers.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Smart Contract NFT Specification
> Base specification for non-fungible tokens implemented as smart contracts.
## Abstract
[Section titled “Abstract”](#abstract)
This specifies an interface for non-fungible tokens (NFTs) to be implemented on Algorand as smart contracts. This interface defines a minimal interface for NFTs to be owned and traded, to be augmented by other standard interfaces and custom methods.
## Motivation
[Section titled “Motivation”](#motivation)
Currently most NFTs in the Algorand ecosystem are implemented as ASAs. However, to provide rich extra functionality, it can be desirable to implement NFTs as a smart contract instead. To foster an interoperable NFT ecosystem, it is necessary that the core interfaces for NFTs be standardized.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### Core NFT specification
[Section titled “Core NFT specification”](#core-nft-specification)
A smart contract NFT that is compliant with this standard must implement the interface detection standard defined in [ARC-73](/arc-standards/arc-0073).
Additionally, the smart contract MUST implement the following interface:
```json
{
"name": "ARC-72",
"desc": "Smart Contract NFT Base Interface",
"methods": [
{
"name": "arc72_ownerOf",
"desc": "Returns the address of the current owner of the NFT with the given tokenId",
"readonly": true,
"args": [
{ "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" }
],
"returns": { "type": "address", "desc": "The current owner of the NFT." }
},
{
"name": "arc72_transferFrom",
"desc": "Transfers ownership of an NFT",
"readonly": false,
"args": [
{ "type": "address", "name": "from" },
{ "type": "address", "name": "to" },
{ "type": "uint256", "name": "tokenId" }
],
"returns": { "type": "void" }
}
],
"events": [
{
"name": "arc72_Transfer",
"desc": "Transfer ownership of an NFT",
"args": [
{
"type": "address",
"name": "from",
"desc": "The current owner of the NFT"
},
{
"type": "address",
"name": "to",
"desc": "The new owner of the NFT"
},
{
"type": "uint256",
"name": "tokenId",
"desc": "The ID of the transferred NFT"
}
]
}
]
}
```
Ownership of a token ID by the zero address indicates that the ID is invalid. The `arc72_ownerOf` method MUST return the zero address for invalid token IDs. The `arc72_transferFrom` method MUST error when `from` is not the owner of `tokenId`. The `arc72_transferFrom` method MUST error unless called by the owner of `tokenId` or by an approved operator as defined by an extension such as the transfer management extension defined in this ARC. The `arc72_transferFrom` method MUST emit an `arc72_Transfer` event when a transfer is successful. An `arc72_Transfer` event SHOULD be emitted, with `from` being the zero address, when a token is first minted. An `arc72_Transfer` event SHOULD be emitted, with `to` being the zero address, when a token is destroyed.
All methods in this and other interfaces defined throughout this standard that are marked as `readonly` MUST be read-only as defined by [ARC-22](/arc-standards/arc-0022).
The ARC-73 interface selector for this core interface is `0x53f02a40`.
### Metadata Extension
[Section titled “Metadata Extension”](#metadata-extension)
A smart contract NFT that is compliant with this metadata extension MUST implement the interfaces required to comply with the Core NFT Specification, as well as the following interface:
```json
{
"name": "ARC-72 Metadata Extension",
"desc": "Smart Contract NFT Metadata Interface",
"methods": [
{
"name": "arc72_tokenURI",
"desc": "Returns a URI pointing to the NFT metadata",
"readonly": true,
"args": [
{ "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" }
],
"returns": { "type": "byte[256]", "desc": "URI to token metadata." }
}
]
}
```
URIs shorter than the return length MUST be padded with zero bytes at the end of the URI. The token URI returned SHOULD be an `ipfs://...` URI so the metadata can’t expire or be changed by a lapse or takeover of a DNS registration. The token URI SHOULD NOT be an `http://` URI due to security concerns. The URI SHOULD resolve to a JSON file following:
* the JSON Metadata File Schema defined in [ARC-3](/arc-standards/arc-0003).
* the standard for declaring traits defined in [ARC-16](/arc-standards/arc-0016).
Future standards could define new recommended URI or file formats for metadata.
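Since `arc72_tokenURI` returns a fixed-width `byte[256]` value, clients need to strip the zero-byte padding before using the URI. A minimal sketch (the helper name and the example CID are hypothetical):

```python
def decode_token_uri(raw: bytes) -> str:
    """Strip the trailing zero-byte padding from an arc72_tokenURI return value."""
    return raw.rstrip(b"\x00").decode("utf-8")

# Example: a short URI padded out to the full 256-byte return width.
uri = b"ipfs://QmExample"  # hypothetical CID for illustration
padded = uri + b"\x00" * (256 - len(uri))
print(decode_token_uri(padded))  # ipfs://QmExample
```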
The ARC-73 interface selector for this metadata extension interface is `0xc3c1fc00`.
### Transfer Management Extension
[Section titled “Transfer Management Extension”](#transfer-management-extension)
A smart contract NFT that is compliant with this transfer management extension MUST implement the interfaces required to comply with the Core NFT Specification, as well as the following interface:
```json
{
"name": "ARC-72 Transfer Management Extension",
"desc": "Smart Contract NFT Transfer Management Interface",
"methods": [
{
"name": "arc72_approve",
"desc": "Approve a controller for a single NFT",
"readonly": false,
"args": [
{ "type": "address", "name": "approved", "desc": "Approved controller address" },
{ "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" }
],
"returns": { "type": "void" }
},
{
"name": "arc72_setApprovalForAll",
"desc": "Approve an operator for all NFTs for a user",
"readonly": false,
"args": [
{ "type": "address", "name": "operator", "desc": "Approved operator address" },
{ "type": "bool", "name": "approved", "desc": "true to give approval, false to revoke" }
],
"returns": { "type": "void" }
},
{
"name": "arc72_getApproved",
"desc": "Get the current approved address for a single NFT",
"readonly": true,
"args": [
{ "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" }
],
"returns": { "type": "address", "desc": "address of approved user or zero" }
},
{
"name": "arc72_isApprovedForAll",
"desc": "Query if an address is an authorized operator for another address",
"readonly": true,
"args": [
{ "type": "address", "name": "owner" },
{ "type": "address", "name": "operator" }
],
"returns": { "type": "bool", "desc": "whether operator is authorized for all NFTs of owner" }
}
],
"events": [
{
"name": "arc72_Approval",
"desc": "An address has been approved to transfer ownership of the NFT",
"args": [
{
"type": "address",
"name": "owner",
"desc": "The current owner of the NFT"
},
{
"type": "address",
"name": "approved",
"desc": "The approved user for the NFT"
},
{
"type": "uint256",
"name": "tokenId",
"desc": "The ID of the NFT"
}
]
},
{
"name": "arc72_ApprovalForAll",
"desc": "Operator set or unset for all NFTs defined by this contract for an owner",
"args": [
{
"type": "address",
"name": "owner",
"desc": "The current owner of the NFT"
},
{
"type": "address",
"name": "operator",
"desc": "The approved user for the NFT"
},
{
"type": "bool",
"name": "approved",
"desc": "Whether operator is authorized for all NFTs of owner"
}
]
}
]
}
```
The `arc72_Approval` event MUST be emitted when the `arc72_approve` method is called successfully. The zero address for the `arc72_approve` method and the `arc72_Approval` event indicates no approval, including revocation of a previous single-NFT controller. When an `arc72_Transfer` event is emitted, this also indicates that the approved address for that NFT (if any) is reset to none. The `arc72_ApprovalForAll` event MUST be emitted when the `arc72_setApprovalForAll` method is called successfully. The contract MUST allow multiple operators per owner. The `arc72_transferFrom` method, when its `tokenId` argument is owned by its `from` argument, MUST succeed when called by an address that is approved for the given NFT or approved as an operator for the owner.
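These state transitions can be modeled off-chain. The sketch below is an illustrative in-memory Python model of the required behavior, not contract code; the class and address names are hypothetical. It shows that approving the zero address revokes a single-NFT approval, that multiple operators per owner are allowed, and that a transfer resets the single-NFT approval.

```python
ZERO = "A" * 58  # stand-in for the zero address

class Arc72Model:
    def __init__(self):
        self.owner = {}          # tokenId -> owner address
        self.approved = {}       # tokenId -> approved controller (if any)
        self.operators = set()   # (owner, operator) pairs

    def approve(self, caller, controller, token_id):
        assert caller == self.owner[token_id]
        if controller == ZERO:
            self.approved.pop(token_id, None)      # zero address revokes
        else:
            self.approved[token_id] = controller

    def set_approval_for_all(self, owner, operator, flag):
        if flag:
            self.operators.add((owner, operator))  # multiple operators allowed
        else:
            self.operators.discard((owner, operator))

    def transfer_from(self, caller, frm, to, token_id):
        assert self.owner[token_id] == frm
        assert (caller == frm
                or self.approved.get(token_id) == caller
                or (frm, caller) in self.operators)
        self.owner[token_id] = to
        self.approved.pop(token_id, None)  # transfer resets single-NFT approval

m = Arc72Model()
m.owner[1] = "ALICE"
m.approve("ALICE", "BOB", 1)               # Alice approves Bob for NFT 1
m.transfer_from("BOB", "ALICE", "CAROL", 1)
assert m.owner[1] == "CAROL"
assert 1 not in m.approved                 # approval was reset by the transfer
```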
The ARC-73 interface selector for this transfer management extension interface is `0xb9c6f696`.
### Enumeration Extension
[Section titled “Enumeration Extension”](#enumeration-extension)
A smart contract NFT that is compliant with this enumeration extension MUST implement the interfaces required to comply with the Core NFT Specification, as well as the following interface:
```json
{
"name": "ARC-72 Enumeration Extension",
"desc": "Smart Contract NFT Enumeration Interface",
"methods": [
{
"name": "arc72_balanceOf",
"desc": "Returns the number of NFTs owned by an address",
"readonly": true,
"args": [
{ "type": "address", "name": "owner" }
],
"returns": { "type": "uint256" }
},
{
"name": "arc72_totalSupply",
"desc": "Returns the number of NFTs currently defined by this contract",
"readonly": true,
"args": [],
"returns": { "type": "uint256" }
},
{
"name": "arc72_tokenByIndex",
"desc": "Returns the token ID of the token with the given index among all NFTs defined by the contract",
"readonly": true,
"args": [
{ "type": "uint256", "name": "index" }
],
"returns": { "type": "uint256" }
}
]
}
```
The sort order for NFT indices is not specified. The `arc72_tokenByIndex` method MUST error when `index` is greater than `arc72_totalSupply`.
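A client can enumerate every NFT a contract defines by walking the indices up to the total supply. A hedged sketch, where `token_by_index` stands in for a call to the contract's `arc72_tokenByIndex` method:

```python
def all_token_ids(total_supply: int, token_by_index) -> list:
    # Iterate index 0..totalSupply-1; the sort order of indices is unspecified.
    return [token_by_index(i) for i in range(total_supply)]

# Toy stand-in: a contract defining token IDs 7, 42, and 9.
ids = [7, 42, 9]
assert all_token_ids(3, lambda i: ids[i]) == [7, 42, 9]
```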
The ARC-73 interface selector for this enumeration extension interface is `0xa57d4679`.
## Rationale
[Section titled “Rationale”](#rationale)
This specification is based on [ERC-721](https://eips.ethereum.org/EIPS/eip-721), with some differences.
### Core Specification
[Section titled “Core Specification”](#core-specification)
The core specification differs from ERC-721 by:
* removing `safeTransferFrom`, since there is no way to test whether an address on Algorand corresponds to a smart contract
* moving management functionality out of the base specification into an extension
* moving balance query functionality out of the base specification into the enumeration extension
Moving functionality out of the core specification into extensions allows the base specification to be much simpler, and allows extensions for extra capabilities to evolve separately from the core idea of owning and transferring ownership of non-fungible tokens. It is recommended that NFT contract authors make use of extensions to enrich the capabilities of their NFTs.
### Metadata Extension
[Section titled “Metadata Extension”](#metadata-extension-1)
The metadata extension differs from the ERC-721 metadata extension by using a fixed-length URI return and removing the `symbol` and `name` operations. Metadata such as symbol or name can be included in the metadata pointed to by the URI.
### Transfer Management Extension
[Section titled “Transfer Management Extension”](#transfer-management-extension-1)
The transfer management extension is taken from the set of methods and events from the base ERC-721 specification that deal with approving other addresses to transfer ownership of an NFT. This functionality is important for trusted NFT galleries like OpenSea to list and sell NFTs on behalf of users while allowing the owner to maintain on-chain ownership. However, this set of functionality is the bulk of the complexity of the ERC-721 standard, and moving it into an extension vastly simplifies the core NFT specification. Additionally, other interfaces have been proposed to allow for the sale of NFTs in decentralized manners without needing to give transfer control to a trusted third party.
### Enumeration Extension
[Section titled “Enumeration Extension”](#enumeration-extension-1)
The enumeration extension is taken from the ERC-721 enumeration extension. However, it also includes the `arc72_balanceOf` function that is included in the base ERC-721 specification. This change simplifies the core standard and groups the `arc72_balanceOf` function with related functionality for contracts where supply details are desired.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
This standard introduces a new kind of NFT that is incompatible with NFTs defined as ASAs. Applications that want to index, manage, or view NFTs on Algorand will need to add code to handle both these new smart contract NFTs and the already popular ASA implementation of NFTs, and existing smart contracts that handle ASA-based NFTs will not work with these new smart contract NFTs.
While this is a severe backwards incompatibility, smart contract NFTs are necessary to provide richer and more diverse functionality for NFTs.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
The fact that anybody can create a new implementation of a smart contract NFT standard opens the door for many of those implementations to contain security bugs. Additionally, malicious NFT implementations could contain hidden anti-features unexpected by users. As with other smart contract domains, it is difficult for users to verify or understand the security properties of smart contract NFTs. This is a tradeoff compared with ASA NFTs, which share a smaller, easier-to-validate set of security properties, accepted in exchange for the possibility of adding novel features.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Interface Detection Spec
> A specification for smart contracts and indexers to detect interfaces of smart contracts.
## Abstract
[Section titled “Abstract”](#abstract)
This ARC specifies an interface detection interface based on [ERC-165](https://eips.ethereum.org/EIPS/eip-165). This interface allows smart contracts and indexers to detect whether a smart contract implements a particular interface based on an interface selector.
## Motivation
[Section titled “Motivation”](#motivation)
[ARC-4](/arc-standards/arc-0004) applications have associated Contract or Interface description JSON objects that allow users to call their methods. However, these JSON objects are communicated outside of the consensus network. Therefore, indexers cannot reliably identify contract instances of a particular interface, and smart contracts have no way to detect whether another contract supports a particular interface. An on-chain method to detect interfaces allows greater composability for smart contracts, and allows indexers to automatically detect implementations of interfaces of interest.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### How Interfaces are Identified
[Section titled “How Interfaces are Identified”](#how-interfaces-are-identified)
The specification for interfaces is defined by [ARC-4](/arc-standards/arc-0004). This specification extends ARC-4 to define the concept of an interface selector. We define the interface selector as the XOR of all selectors in the interface. Selectors in the interface include selectors for methods, selectors for events as defined by [ARC-28](/arc-standards/arc-0028), and selectors for potential future kinds of interface components.
As an example, consider an interface that has two methods and one event: `add(uint64,uint64)uint128`, `add3(uint64,uint64,uint64)uint128`, and `alert(uint64)`. The method selector for each method is the first 4 bytes of the method signature’s SHA-512/256 hash. The SHA-512/256 hash of `add(uint64,uint64)uint128` is `0x8aa3b61f0f1965c3a1cbfa91d46b24e54c67270184ff89dc114e877b1753254a`, so its method selector is `0x8aa3b61f`. The SHA-512/256 hash of `add3(uint64,uint64,uint64)uint128` is `0xa6fd1477731701dd2126f24facf3492d470cf526e7d4d849fea33d102b45f03d`, so its method selector is `0xa6fd1477`. The SHA-512/256 hash of `alert(uint64)` is `0xc809efe9fd45417226d52b605658b83fff27850a01efeea30f694d1e112d5463`, so its selector is `0xc809efe9`. The interface selector is defined as the bitwise exclusive or of all method and event selectors, so the interface selector is `0x8aa3b61f XOR 0xa6fd1477 XOR 0xc809efe9`, which is `0xe4574d81`.
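The final XOR step of this worked example can be checked directly. The per-signature selectors below are taken as given from the hashes in the text:

```python
# Interface selector = XOR of all method/event selectors in the interface.
# Each selector is the first 4 bytes of the signature's SHA-512/256 hash,
# taken from the worked example above.
selectors = [
    0x8aa3b61f,  # add(uint64,uint64)uint128
    0xa6fd1477,  # add3(uint64,uint64,uint64)uint128
    0xc809efe9,  # alert(uint64) event
]

interface_selector = 0
for s in selectors:
    interface_selector ^= s

assert interface_selector == 0xe4574d81
```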
### How a Contract will Publish the Interfaces it Implements for Detection
[Section titled “How a Contract will Publish the Interfaces it Implements for Detection”](#how-a-contract-will-publish-the-interfaces-it-implements-for-detection)
In addition to out-of-band JSON contract or interface description data, a contract that is compliant with this specification shall implement the following interface:
```json
{
"name": "ARC-73",
"desc": "Interface for interface detection",
"methods": [
{
"name": "supportsInterface",
"desc": "Detects support for an interface specified by selector.",
"readonly": true,
"args": [
{ "type": "byte[4]", "name": "interfaceID", "desc": "The selector of the interface to detect." }
],
"returns": { "type": "bool", "desc": "Whether the contract supports the interface." }
}
]
}
```
The `supportsInterface` method must be `readonly` as specified by [ARC-22](/arc-standards/arc-0022).
The implementing contract must have a `supportsInterface` method that returns:
* `true` when `interfaceID` is `0x4e22a3ba` (the selector for [ARC-73](/arc-standards/arc-0073), this interface)
* `false` when `interfaceID` is `0xffffffff`
* `true` for any other `interfaceID` the contract implements
* `false` for any other `interfaceID`
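The four return rules can be expressed as a small decision function. This is an off-chain Python model of the required behavior, not TEAL or contract code; `implemented` is a hypothetical set of the interface selectors a given contract supports:

```python
ARC73_SELECTOR = bytes.fromhex("4e22a3ba")  # selector for ARC-73 itself
INVALID = bytes.fromhex("ffffffff")         # must always return false

def supports_interface(implemented: set, interface_id: bytes) -> bool:
    # Mirrors the four required return rules of ARC-73's supportsInterface.
    if interface_id == INVALID:
        return False
    if interface_id == ARC73_SELECTOR:
        return True
    return interface_id in implemented

impl = {bytes.fromhex("c3c1fc00")}  # e.g. the ARC-72 metadata extension
assert supports_interface(impl, ARC73_SELECTOR) is True
assert supports_interface(impl, INVALID) is False
assert supports_interface(impl, bytes.fromhex("c3c1fc00")) is True
assert supports_interface(impl, bytes.fromhex("12345678")) is False
```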
## Rationale
[Section titled “Rationale”](#rationale)
This specification is nearly identical to the related specification for Ethereum, [ERC-165](https://eips.ethereum.org/EIPS/eip-165), merely adapted to Algorand.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
It is possible that a malicious contract may lie about interface support. This interface makes it easier for all kinds of actors, including malicious ones, to interact with smart contracts that implement it.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# NFT Indexer API
> REST API for reading data about ARC-72 smart contract NFTs.
## Abstract
[Section titled “Abstract”](#abstract)
This specifies a REST interface that can be implemented by indexing services to provide data about NFTs conforming to the [ARC-72](/arc-standards/arc-0072) standard.
## Motivation
[Section titled “Motivation”](#motivation)
While most data is available on-chain, reading and analyzing on-chain logs to get a complete and current picture about NFT ownership and history is slow and impractical for many uses. This REST interface standard allows analysis of NFT contracts to be done in a centralized manner to provide fast, up-to-date responses to queries, while allowing users to pick from any indexing provider.
## Specification
[Section titled “Specification”](#specification)
This specification defines two REST endpoints: `/nft-indexer/v1/tokens` and `/nft-indexer/v1/transfers`. Both endpoints respond only to `GET` requests, take no path parameters, and consume no request body, but both accept a variety of query parameters.
### `GET /nft-indexer/v1/tokens`
[Section titled “GET /nft-indexer/v1/tokens”](#get-nft-indexerv1tokens)
Produces `application/json`.
Optional Query Parameters:
| Name | Schema | Description |
| -------------- | ------- | ------------------------------------------------------------------------------------------------------------------------ |
| round | integer | Include results for the specified round. For performance reasons, this parameter may be disabled on some configurations. |
| next | string | Token for the next page of results. Use the `next-token` provided by the previous page of results. |
| limit | integer | Maximum number of results to return. There could be additional pages even if the limit is not reached. |
| contractId | integer | Limit results to NFTs implemented by the given contract ID. |
| tokenId | integer | Limit results to NFTs with the given token ID. |
| owner | address | Limit results to NFTs owned by the given owner. |
| mint-min-round | integer | Limit results to NFTs minted on or after the given round. |
| mint-max-round | integer | Limit results to NFTs minted on or before the given round. |
When successful, returns a response with code 200 and an object with the schema:
| Name | Required? | Schema | Description |
| ------------- | --------- | ------- | -------------------------------------------------------------------------------------------- |
| tokens | required | array | Array of Token objects that fit the query parameters, as defined below. |
| current-round | required | integer | Round at which the results were computed. |
| next-token | optional | string | Used for pagination, when making another request provide this token as the `next` parameter. |
The `Token` object has the following schema:
| Name | Required? | Schema | Description |
| ----------- | --------- | ------- | -------------------------------------------------------------------------------------------------------------------------- |
| owner | required | address | The current owner of the NFT. |
| contractId | required | integer | The ID of the ARC-72 contract that defines the NFT. |
| tokenId | required | integer | The tokenID of the NFT, which along with the contractId addresses a unique ARC-72 token. |
| mint-round | optional | integer | The round at which the NFT was minted (i.e. the round at which it was transferred from the zero address to the first owner). |
| metadataURI | optional | string | The URI given for the token by the contract's `arc72_tokenURI` method, if applicable. |
| metadata | optional | object | The result of resolving the `metadataURI`, if applicable and available. |
When unsuccessful, returns a response with code 400 or 500 and an object with the schema:
| Name | Required? | Schema |
| ------- | --------- | ------ |
| data | optional | object |
| message | required | string |
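A client builds requests against this endpoint by attaching only the query parameters it needs. A minimal sketch using the standard library; the base URL is a placeholder, not a real indexing service:

```python
from urllib.parse import urlencode

def tokens_url(base: str, **params) -> str:
    # Drop unset parameters, then build the query string for the tokens endpoint.
    query = {k: v for k, v in params.items() if v is not None}
    return f"{base}/nft-indexer/v1/tokens?{urlencode(query)}"

# "https://indexer.example.com" is a hypothetical host.
url = tokens_url("https://indexer.example.com",
                 contractId=123, owner=None, limit=50)
assert url == "https://indexer.example.com/nft-indexer/v1/tokens?contractId=123&limit=50"
```

To paginate, a client would pass the `next-token` value from one response as the `next` parameter of the following request.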
### `GET /nft-indexer/v1/transfers`
[Section titled “GET /nft-indexer/v1/transfers”](#get-nft-indexerv1transfers)
Produces `application/json`.
Optional Query Parameters:
| Name | Schema | Description |
| ---------- | ------- | ------------------------------------------------------------------------------------------------------------------------ |
| round | integer | Include results for the specified round. For performance reasons, this parameter may be disabled on some configurations. |
| next | string | Token for the next page of results. Use the `next-token` provided by the previous page of results. |
| limit | integer | Maximum number of results to return. There could be additional pages even if the limit is not reached. |
| contractId | integer | Limit results to NFTs implemented by the given contract ID. |
| tokenId | integer | Limit results to NFTs with the given token ID. |
| user | address | Limit results to transfers where the user is either the sender or receiver. |
| from | address | Limit results to transfers with the given address as the sender. |
| to | address | Limit results to transfers with the given address as the receiver. |
| min-round | integer | Limit results to transfers that were executed on or after the given round. |
| max-round | integer | Limit results to transfers that were executed on or before the given round. |
When successful, returns a response with code 200 and an object with the schema:
| Name | Required? | Schema | Description |
| ------------- | --------- | ------- | -------------------------------------------------------------------------------------------- |
| transfers | required | array | Array of Transfer objects that fit the query parameters, as defined below. |
| current-round | required | integer | Round at which the results were computed. |
| next-token | optional | string | Used for pagination, when making another request provide this token as the `next` parameter. |
The `Transfer` object has the following schema:
| Name | Required? | Schema | Description |
| ---------- | --------- | ------- | ---------------------------------------------------------------------------------------- |
| contractId | required | integer | The ID of the ARC-72 contract that defines the NFT. |
| tokenId | required | integer | The tokenID of the NFT, which along with the contractId addresses a unique ARC-72 token. |
| from | required | address | The sender of the transaction. |
| to | required | address | The receiver of the transaction. |
| round | required | integer | The round of the transfer. |
When unsuccessful, returns a response with code 400 or 500 and an object with the schema:
| Name | Required? | Schema |
| ------- | --------- | ------ |
| data | optional | object |
| message | required | string |
## Rationale
[Section titled “Rationale”](#rationale)
This standard was designed to feel similar to the Algorand indexer API, and uses the same query parameters and results where applicable.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
This standard presents a versioned REST interface, allowing future extensions to change the interface in incompatible ways while allowing for the old service to run in tandem.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
All data available through this indexer API is publicly available.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Password Account
> Password account using PBKDF2
## Abstract
[Section titled “Abstract”](#abstract)
This standard specifies a computation of the seed bytes for a Password Account. For general adoption, it is easier for people to remember a passphrase than a mnemonic. With this standard, a person can hash a passphrase and derive the seed bytes for an Ed25519 Algorand account.
## Motivation
[Section titled “Motivation”](#motivation)
By providing a clear and precise computation process, Password Account lets individuals obtain the seed bytes for an Algorand account directly from a passphrase. For practicality and widespread adoption, remembering a passphrase is far easier than safeguarding a mnemonic. With this standard, users can derive an Ed25519 Algorand account by simply hashing their passphrase to receive the corresponding seed bytes.
This standard seeks synchronization between wallets that may provide password-protected accounts.
## Specification
[Section titled “Specification”](#specification)
Seed bytes are generated with the following algorithm:
```plaintext
const init = `ARC-0076-${password}-{slotId}-PBKDF2-999999`;
const salt = `ARC-0076-{slotId}-PBKDF2-999999`;
const iterations = 999999;
const cryptoKey = await window.crypto.subtle.importKey(
"raw",
Buffer.from(init, "utf-8"),
"PBKDF2",
false,
["deriveBits", "deriveKey"]
);
const masterBits = await window.crypto.subtle.deriveBits(
{
name: "PBKDF2",
hash: "SHA-256",
salt: Buffer.from(salt, "utf-8"),
iterations: iterations,
},
cryptoKey,
256
);
const uint8 = new Uint8Array(masterBits);
const mnemonic = algosdk.mnemonicFromSeed(uint8);
const genAccount = algosdk.mnemonicToSecretKey(mnemonic);
```
The data section SHOULD be at least 16 bytes long.
The slot ID is the account iteration; the default is “0”.
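The browser-oriented snippet above is PBKDF2-HMAC-SHA256 with 999999 iterations and a 256-bit output. The same derivation can be sketched with the Python standard library. Note one ambiguity: the published strings contain the literal text `{slotId}` (no `$`), so whether the slot ID is meant to be interpolated is unclear from the snippet alone; this sketch interpolates it, and that choice is an assumption.

```python
import hashlib

def arc76_seed(password: str, slot_id: str = "0") -> bytes:
    # Assumption: slot_id is interpolated into the strings; the spec's snippet
    # shows the literal text "{slotId}", which may or may not be intentional.
    init = f"ARC-0076-{password}-{slot_id}-PBKDF2-999999"
    salt = f"ARC-0076-{slot_id}-PBKDF2-999999"
    # PBKDF2-HMAC-SHA256, 999999 iterations, 256-bit (32-byte) output.
    return hashlib.pbkdf2_hmac("sha256",
                               init.encode("utf-8"),
                               salt.encode("utf-8"),
                               999999,
                               dklen=32)
```

The resulting 32 bytes correspond to `masterBits` in the snippet above and would then be fed to `algosdk.mnemonicFromSeed` to obtain the account.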
### Email Password account
[Section titled “Email Password account”](#email-password-account)
An Email Password account is an account generated from the following initial data:
```plaintext
const init = `ARC-0076-${email}-${password}-{slotId}-PBKDF2-999999`;
const salt = `ARC-0076-${email}-{slotId}-PBKDF2-999999`;
```
The email part can be published to the service provider backend and verified by the service provider. Password MUST NOT be transferred over the network.
Length of the password SHOULD be at least 16 bytes long.
### Sample data
[Section titled “Sample data”](#sample-data)
This sample data may be used for verification of the `ARC-0076` implementation.
```plaintext
const email = "email@example.com";
const password = "12345678901234567890123456789012345678901234567890";
const slotId = "0";
const init = `ARC-0076-${email}-${password}-{slotId}-PBKDF2-999999`;
const salt = `ARC-0076-${email}-{slotId}-PBKDF2-999999`;
```
Results in:
```plaintext
masterBits = [225,7,139,154,245,210,181,138,188,129,145,53,246,184,243,88,163,163,109,208,77,71,7,235,81,244,129,215,102,168,105,21]
account.addr = "5AHWQJ5D52K4GRW4JWQ5GMR53F7PDSJEGT4PXVFSBQYE7VXDVG3WSPWSBM"
```
## Rationale
[Section titled “Rationale”](#rationale)
This standard was designed to allow wallets to provide password-protected accounts that do not require the general population to store a mnemonic. The email extension allows service providers to bind a specific account to an email address, and gives users the familiar web2 experience of a basic email-and-password authentication form.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
We expect future extensions to be compatible with Password Account. The hash mechanism for future algorithms should be identified by a suffix such as `-PBKDF2-999999`.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
This standard makes the security strength of the account dependent on how the user generates the password.
This standard relies on the randomness and collision resistance of PBKDF2 and SHA-256. Users MUST be informed about the risks associated with this type of account.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# URI scheme, keyreg Transactions extension
> A specification for encoding Key Registration Transactions in a URI format.
## Abstract
[Section titled “Abstract”](#abstract)
This URI specification represents an extension to the base Algorand URI encoding standard ([ARC-26](/arc-standards/arc-0026)) that specifies encoding of key registration transactions through deeplinks, QR codes, etc.
## Specification
[Section titled “Specification”](#specification)
### General format
[Section titled “General format”](#general-format)
As in [ARC-26](/arc-standards/arc-0026), URIs follow the general format for URIs as set forth in [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986). The path component consists of an Algorand address, and the query component provides additional transaction parameters.
Elements of the query component may contain characters outside the valid range. These are encoded differently depending on their expected character set. The text components (note, xnote) must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence must be percent-encoded as described in RFC 3986. The binary components (votekey, selkey, etc.) must be encoded with base64url as specified in [RFC 4648 section 5](https://www.rfc-editor.org/rfc/rfc4648.html#section-5).
### Scope
[Section titled “Scope”](#scope)
This ARC explicitly supports the two major subtypes of key registration transactions:
* Online keyreg transaction
* Declares intent to participate in consensus and configures required keys
* Offline keyreg transaction
* Declares intent to stop participating in consensus
The following variants of keyreg transactions are not defined:
* Non-participating keyreg transaction
* This transaction subtype is considered deprecated
* Heartbeat keyreg transaction
* This transaction subtype will be included in the future block incentives protocol. The protocol specifies that this transaction type must be submitted by a node in response to a programmatic “liveness challenge”. It is not meant to be signed or submitted by an end user.
### ABNF Grammar
[Section titled “ABNF Grammar”](#abnf-grammar)
```plaintext
algorandurn = "algorand://" algorandaddress [ "?" keyregparams ]
algorandaddress = *base32
keyregparams = keyregparam [ "&" keyregparams ]
keyregparam = [ typeparam / votekeyparam / selkeyparam / sprfkeyparam / votefstparam / votelstparam / votekdparam / noteparam / feeparam / otherparam ]
typeparam = "type=keyreg"
votekeyparam = "votekey=" *qbase64url
selkeyparam = "selkey=" *qbase64url
sprfkeyparam = "sprfkey=" *qbase64url
votefstparam = "votefst=" *qdigit
votelstparam = "votelst=" *qdigit
votekdparam = "votekd=" *qdigit
noteparam = ( xnote / note )
xnote = "xnote=" *qchar
note = "note=" *qchar
feeparam = "fee=" *qdigit
otherparam = qchar *qchar [ "=" *qchar ]
```
* “qbase64url” corresponds to valid characters of “base64url” encoding, as defined in [RFC 4648 section 5](https://www.rfc-editor.org/rfc/rfc4648.html#section-5)
* “qchar” corresponds to valid characters of an RFC 3986 URI query component, excluding the ”=” and ”&” characters, which this specification takes as separators.
As in the base [ARC-26](/arc-standards/arc-0026) standard, the scheme component (“algorand:”) is case-insensitive, and implementations must accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys.
### Query Keys
[Section titled “Query Keys”](#query-keys)
* address: Algorand address of transaction sender. Required.
* type: fixed to “keyreg”. Used to disambiguate the transaction type from the base [ARC-26](/arc-standards/arc-0026) standard and other possible extensions. Required.
* votekey: The vote key parameter to use in the transaction. Encoded with [base64url](https://www.rfc-editor.org/rfc/rfc4648.html#section-5) encoding. Required for keyreg online transactions.
* selkey: The selection key parameter to use in the transaction. Encoded with [base64url](https://www.rfc-editor.org/rfc/rfc4648.html#section-5) encoding. Required for keyreg online transactions.
* sprfkey: The state proof key parameter to use in the transaction. Encoded with [base64url](https://www.rfc-editor.org/rfc/rfc4648.html#section-5) encoding. Required for keyreg online transactions.
* votefst: The first round on which the voting keys will be valid. Required for keyreg online transactions.
* votelst: The last round on which the voting keys will be valid. Required for keyreg online transactions.
* votekd: The voting key dilution parameter to use. Required for keyreg online transactions.
* xnote: As in [ARC-26](/arc-standards/arc-0026). A URL-encoded notes field value that must not be modifiable by the user when displayed to users. Optional.
* note: As in [ARC-26](/arc-standards/arc-0026). A URL-encoded default notes field value that the user interface may optionally make editable by the user. Optional.
* fee: A static fee to set for the transaction in microAlgos. Useful to signal intent to receive participation incentives (e.g. with a 2,000,000 microAlgo transaction fee). Optional.
* (others): optional, for future extensions
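A wallet or tool can assemble such a URI from these keys. A minimal sketch; the address is a placeholder, and because base64url and digit values are already URI-safe no further escaping is done here (arbitrary `note` text would still need percent-encoding per RFC 3986):

```python
def keyreg_uri(address: str, **params) -> str:
    # Build an ARC-78 style URI: address in the path, type=keyreg first,
    # then the remaining query keys in the given order.
    query = "&".join(f"{k}={v}" for k, v in params.items())
    return f"algorand://{address}?type=keyreg" + ("&" + query if query else "")

# "AAAA" is a placeholder, not a valid Algorand address.
uri = keyreg_uri("AAAA", votefst=1300, votelst=11300, votekd=100)
assert uri == "algorand://AAAA?type=keyreg&votefst=1300&votelst=11300&votekd=100"
```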
### Appendix
[Section titled “Appendix”](#appendix)
This section contains encoding examples. The raw transaction object is presented along with the resulting [ARC-78](/arc-standards/arc-0078) URI encoding.
#### Encoding keyreg online transaction with minimum fee
[Section titled “Encoding keyreg online transaction with minimum fee”](#encoding-keyreg-online-transaction-with-minimum-fee)
The following raw keyreg transaction:
```plaintext
{
"txn": {
"fee": 1000,
"fv": 1345,
"gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=",
"lv": 2345,
"selkey:b64": "+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=",
"snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=",
"sprfkey:b64": "3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W/iy/JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg==",
"type": "keyreg",
"votefst": 1300,
"votekd": 100,
"votekey:b64": "UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI=",
"votelst": 11300
}
}
```
Will result in this ARC-78 encoded URI:
```plaintext
algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY?
type=keyreg
&selkey=-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c
&sprfkey=3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W_iy_JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg
&votefst=1300
&votekd=100
&votekey=UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI
&votelst=11300
```
Note: newlines added for readability.
Note the difference between base64 encoding in the raw object and base64url encoding in the URI parameters. For example, the selection key parameter `selkey` that begins with `+lfw+` in the raw object is encoded in base64url to `-lfw-`.
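This base64-to-base64url conversion can be sketched with the Python standard library, using the selection key from the raw transaction above to confirm the mapping:

```python
import base64

def b64_to_b64url(value: str) -> str:
    # Re-encode a standard base64 string as unpadded base64url for the URI.
    raw = base64.b64decode(value)
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

# The selkey from the raw object begins with "+lfw+"; in the URI it is "-lfw-".
assert (b64_to_b64url("+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=")
        == "-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c")
```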
Note: here, the fee is omitted from the URI because it is set to the minimum 1,000 microAlgos. When the fee is omitted, it is left up to the application or wallet to decide. This is for demonstrative purposes; the ARC-78 standard does not require this behavior.
#### Encoding keyreg offline transaction
[Section titled “Encoding keyreg offline transaction”](#encoding-keyreg-offline-transaction)
The following raw keyreg transaction:
```plaintext
{
"txn": {
"fee": 1000,
"fv": 1776240,
"gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=",
"lv": 1777240,
"snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=",
"type": "keyreg"
}
}
```
Will result in this ARC-78 encoded URI:
```plaintext
algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY?type=keyreg
```
This offline keyreg transaction encoding is the smallest compatible ARC-78 representation.
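The address in the URI path is the standard Algorand encoding of the `snd` public key: base32 of the 32-byte key followed by a 4-byte checksum taken from the tail of the key's SHA-512/256 digest. A minimal sketch (assuming a Python build whose OpenSSL exposes `sha512_256`; the helper name is hypothetical):

```python
import base64
import hashlib


def pubkey_to_address(snd_b64: str) -> str:
    """Derive the base32 Algorand address from a base64-encoded 'snd' key."""
    pubkey = base64.b64decode(snd_b64)  # 32-byte Ed25519 public key
    chksum = hashlib.new("sha512_256", pubkey).digest()[-4:]
    # 36 bytes -> 58 base32 characters once padding is stripped.
    return base64.b32encode(pubkey + chksum).decode("ascii").rstrip("=")


print(pubkey_to_address("+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ="))
# -> 7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY
```

This reproduces the address shown in the encoded URIs in this section from the `snd:b64` field of the raw transactions.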
#### Encoding keyreg online transaction with custom fee and note
[Section titled “Encoding keyreg online transaction with custom fee and note”](#encoding-keyreg-online-transaction-with-custom-fee-and-note)
The following raw keyreg transaction:
```plaintext
{
"txn": {
"fee": 2000000,
"fv": 1345,
"gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=",
"lv": 2345,
"note:b64": "Q29uc2Vuc3VzIHBhcnRpY2lwYXRpb24gZnR3",
"selkey:b64": "+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=",
"snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=",
"sprfkey:b64": "3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W/iy/JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg==",
"type": "keyreg",
"votefst": 1300,
"votekd": 100,
"votekey:b64": "UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI=",
"votelst": 11300
}
}
```
Will result in this ARC-78 encoded URI:
```plaintext
algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY?
type=keyreg
&selkey=-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c
&sprfkey=3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W_iy_JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg
&votefst=1300
&votekd=100
&votekey=UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI
&votelst=11300
&fee=2000000
&note=Consensus%2Bparticipation%2Bftw
```
Note: newlines added for readability.
## Rationale
[Section titled “Rationale”](#rationale)
This ARC aims to provide a standardized way to encode key registration transactions, improving the experience of signing them in general and, in particular, for Algorand node runners who do not keep their spending keys resident on their node (as is best practice).
The parameter names were chosen to match the corresponding names in encoded key registration transactions.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# URI scheme, App NoOp call extension
> A specification for encoding NoOp Application call Transactions in a URI format.
## Abstract
[Section titled “Abstract”](#abstract)
NoOp calls are generic application calls that execute an Algorand smart contract’s ApprovalProgram.
This URI specification proposes an extension to the base Algorand URI encoding standard ([ARC-26](/arc-standards/arc-0026)) that specifies encoding of application NoOp transactions into [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986) standard URIs.
## Specification
[Section titled “Specification”](#specification)
### General format
[Section titled “General format”](#general-format)
As in [ARC-26](/arc-standards/arc-0026), URIs follow the general format for URIs as set forth in [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986). The path component consists of an Algorand address, and the query component provides additional transaction parameters.
Elements of the query component may contain characters outside the valid range. These are encoded differently depending on their expected character set. The text components (note, xnote) must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence **MUST** be percent-encoded as described in RFC 3986. The binary components (args, refs, etc.) **MUST** be encoded with base64url as specified in [RFC 4648 section 5](https://www.rfc-editor.org/rfc/rfc4648.html#section-5).
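The two encoding rules can be illustrated with the standard library (a sketch only; strict RFC 3986 percent-encoding is shown, with no form-style `+`-for-space substitution):

```python
import base64
from urllib.parse import quote

# Text component (e.g. note): encode as UTF-8, then percent-encode each
# octet per RFC 3986. safe="" leaves no character unescaped except
# unreserved ones.
print(quote("Consensus participation ftw", safe=""))
# -> Consensus%20participation%20ftw

# Binary component (e.g. args, refs): base64url per RFC 4648 section 5,
# with padding stripped.
print(base64.urlsafe_b64encode(b"\x00\xffkey").rstrip(b"=").decode("ascii"))
```

The distinction matters because binary values are not valid UTF-8 in general, so percent-encoding alone cannot represent them.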
### ABNF Grammar
[Section titled “ABNF Grammar”](#abnf-grammar)
```plaintext
algorandurn = "algorand://" algorandaddress [ "?" noopparams ]
algorandaddress = *base32
noopparams = noopparam [ "&" noopparams ]
noopparam = [ typeparam / appparam / methodparam / argparam / boxparam / assetparam / accountparam / feeparam / otherparam ]
typeparam = "type=appl"
appparam = "app=" *digit
methodparam = "method=" *qchar
boxparam = "box=" *qbase64url
argparam = "arg=" ( *qchar / *digit )
feeparam = "fee=" *digit
accountparam = "account=" *base32
assetparam = "asset=" *digit
otherparam = qchar *qchar [ "=" *qchar ]
```
* “qchar” corresponds to valid characters of an RFC 3986 URI query component, excluding the “=” and “&” characters, which this specification takes as separators.
* “qbase64url” corresponds to valid characters of “base64url” encoding, as defined in [RFC 4648 section 5](https://www.rfc-editor.org/rfc/rfc4648.html#section-5)
* All parameters from the base [ARC-26](/arc-standards/arc-0026) standard are supported and usable if they fit the NoOp application call context (e.g. note)
* As in the base [ARC-26](/arc-standards/arc-0026) standard, the scheme component (“algorand:”) is case-insensitive, and implementations **MUST** accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys.
### Query Keys
[Section titled “Query Keys”](#query-keys)
* address: Algorand address of transaction sender
* type: fixed to “appl”. Used to disambiguate the transaction type from the base [ARC-26](/arc-standards/arc-0026) standard and other possible extensions
* app: The first reference is set to specify the called application (Algorand Smart Contract) ID and is mandatory. Additional references are optional and will be used in the Application NoOp call’s foreign applications array.
* method: Specifies the full method signature (e.g. “example\_method(uint64,uint64)void”).
* arg: Specifies an argument for the NoOp method call, encoded within the URI.
* box: Box reference to be used in the Application NoOp call’s box array.
* asset: Asset reference to be used in the Application NoOp call’s foreign assets array.
* account: Account or NFD address to be used in the Application NoOp call’s foreign accounts array.
* fee: Optional static fee for the transaction, in microAlgos.
* (others): optional, for future extensions
Note 1: If the fee is omitted, the minimum fee is used for the transaction.
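The query keys above can be parsed with the standard library; `parse_qs` keeps repeated `app`/`arg`/`asset` keys as lists, which matches the array semantics of this extension. This parser is a hypothetical sketch, not part of the specification:

```python
from urllib.parse import parse_qs, urlsplit


def parse_noop_uri(uri: str) -> tuple[str, dict[str, list[str]]]:
    """Split a NoOp-extension URI into (sender address, query parameters)."""
    parts = urlsplit(uri)
    # The scheme is case-insensitive; the rest of the URI is not.
    assert parts.scheme.lower() == "algorand"
    return parts.netloc, parse_qs(parts.query)


addr, params = parse_noop_uri(
    "algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4"
    "?type=appl&app=11111111&method=claim(uint64,uint64)byte[]"
    "&arg=20000&arg=474567&asset=45&fee=10000"
)
print(params["arg"])  # -> ['20000', '474567']
```

A wallet would then map the first `app` value to the called application ID and any further `app` values to the foreign applications array.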
### Template URI vs actionable URI
[Section titled “Template URI vs actionable URI”](#template-uri-vs-actionable-uri)
If the URI is constructed so that other dApps, wallets, or protocols can use it with their own runtime Algorand entities of interest, then:
* The placeholder account/app address in the URI **MUST** be the ZeroAddress (“AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ”). Since the ZeroAddress cannot initiate any action, this approach is considered non-vulnerable and secure.
### Example
[Section titled “Example”](#example)
Call the claim(uint64,uint64)byte\[] method on contract 11111111, paying a fee of 10000 microAlgos, from a specific address:
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?type=appl&app=11111111&method=claim(uint64,uint64)byte[]&arg=20000&arg=474567&asset=45&fee=10000
```
Call the same claim(uint64,uint64)byte\[] method on contract 11111111, paying the default minimum fee (no `fee` parameter) and passing additional application references for the foreign applications array:
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?type=appl&app=11111111&method=claim(uint64,uint64)byte[]&arg=20000&arg=474567&asset=45&app=22222222&app=33333333
```
## Rationale
[Section titled “Rationale”](#rationale)
Algorand application NoOp method calls cover the majority of application transactions on Algorand and have a wide range of use cases. For use cases where the runtime knows exactly what the called application needs in terms of arguments and transaction arrays, and there are no direct interactions, this extension is required, since the ARC-26 standard does not currently support application calls.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
None.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# URI scheme blockchain information
> Querying blockchain information using a URI format
## Abstract
[Section titled “Abstract”](#abstract)
This URI specification defines a standardized method for querying application and asset data on Algorand. It enables applications, websites, and QR code implementations to construct URIs that allow users to retrieve data such as application state and asset metadata in a structured format. This specification is inspired by [ARC-26](/arc-standards/arc-0026) and follows similar principles, with adjustments specific to read-only queries for applications and assets.
## Specification
[Section titled “Specification”](#specification)
### General Format
[Section titled “General Format”](#general-format)
Algorand URIs in this standard follow the general format for URIs as defined in [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986). The scheme component specifies whether the URI is querying an application (`algorand://app`) or an asset (`algorand://asset`). Query parameters define the specific data fields being requested. Parameters may contain characters outside the valid range. These must first be encoded in UTF-8, then percent-encoded according to RFC 3986.
### Application Query URI (`algorand://app`)
[Section titled “Application Query URI (algorand://app)”](#application-query-uri-algorandapp)
The application URI allows querying the state of an application, including data from the application’s box storage, global storage, and local storage, as well as the associated TEAL program. Each storage type has specific requirements.
### Asset Query URI (`algorand://asset`)
[Section titled “Asset Query URI (algorand://asset)”](#asset-query-uri-algorandasset)
The asset URI enables retrieval of metadata and configuration details for a specific asset, such as its name, total supply, decimal precision, and associated addresses.
### ABNF Grammar
[Section titled “ABNF Grammar”](#abnf-grammar)
```abnf
algorandappurn = "algorand://app/" appid [ "?" noopparams ]
appid = *digit
noopparams = noopparam [ "&" noopparams ]
noopparam = [ boxparam / globalparam / localparam / tealcodeparam ]
boxparam = "box=" *qbase64url
globalparam = "global=" *qbase64url
localparam = "local=" *qbase64url "&algorandaddress=" *base32
tealcodeparam = "tealcode"
algorandasseturn = "algorand://asset/" assetid [ "?" assetparam ]
assetid = *digit
assetparam = [ totalparam / decimalsparam / frozenparam / unitnameparam / assetnameparam / urlparam / metadatahashparam / managerparam / reserveparam / freezeparam / clawbackparam ]
totalparam = "total"
decimalsparam = "decimals"
frozenparam = "frozen"
unitnameparam = "unitname"
assetnameparam = "assetname"
urlparam = "url"
metadatahashparam = "metadatahash"
managerparam = "manager"
reserveparam = "reserve"
freezeparam = "freeze"
clawbackparam = "clawback"
```
### Parameter Definitions
[Section titled “Parameter Definitions”](#parameter-definitions)
#### Application Parameters
[Section titled “Application Parameters”](#application-parameters)
* **`boxparam`**: Queries the application’s box storage with a key encoded in `base64url`.
* **`globalparam`**: Queries the global storage of the application using a `base64url`-encoded key.
* **`localparam`**: Queries local storage for a specified account. Requires an additional `algorandaddress` parameter, representing the account whose local storage is queried.
#### Asset Parameters
[Section titled “Asset Parameters”](#asset-parameters)
* **`totalparam`** (`total`): Queries the total supply of the asset.
* **`decimalsparam`** (`decimals`): Queries the number of decimal places used for the asset.
* **`frozenparam`** (`frozen`): Queries whether the asset is frozen by default.
* **`unitnameparam`** (`unitname`): Queries the short name or unit symbol of the asset (e.g., “USDT”).
* **`assetnameparam`** (`assetname`): Queries the full name of the asset (e.g., “Tether”).
* **`urlparam`** (`url`): Queries the URL associated with the asset, providing more information.
* **`metadatahashparam`** (`metadatahash`): Queries the metadata hash associated with the asset.
* **`managerparam`** (`manager`): Queries the address of the asset manager.
* **`reserveparam`** (`reserve`): Queries the reserve address holding non-minted units of the asset.
* **`freezeparam`** (`freeze`): Queries the freeze address for the asset.
* **`clawbackparam`** (`clawback`): Queries the clawback address for the asset.
### Query Key Descriptions
[Section titled “Query Key Descriptions”](#query-key-descriptions)
For each parameter, the query key name is listed, followed by its purpose:
* **box**: Retrieves information from the specified box storage key.
* **global**: Retrieves data from the specified global storage key.
* **local**: Retrieves data from the specified local storage key. Requires `algorandaddress` to specify the account.
* **total**: Retrieves the asset’s total supply.
* **decimals**: Retrieves the number of decimal places for the asset.
* **frozen**: Retrieves the default frozen status of the asset.
* **unitname**: Retrieves the asset’s short name or symbol.
* **assetname**: Retrieves the full name of the asset.
* **url**: Retrieves the URL associated with the asset.
* **metadatahash**: Retrieves the metadata hash for the asset.
* **manager**: Retrieves the manager address of the asset.
* **reserve**: Retrieves the reserve address for the asset.
* **freeze**: Retrieves the freeze address of the asset.
* **clawback**: Retrieves the clawback address of the asset.
### Example URIs
[Section titled “Example URIs”](#example-uris)
1. **Querying an Application’s Box Storage**:
```plaintext
algorand://app/2345?box=YWxnb3JvbmQ=
```
Queries box storage with a `base64url`-encoded key.
2. **Querying Global Storage**:
```plaintext
algorand://app/12345?global=Z2xvYmFsX2tleQ==
```
Queries global storage with a `base64url`-encoded key.
3. **Querying Local Storage**:
```plaintext
algorand://app/12345?local=bG9jYWxfa2V5&algorandaddress=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567
```
Queries local storage with a `base64url`-encoded key and specifies the associated account.
4. **Querying Asset Details**:
```plaintext
algorand://asset/67890?total
```
Queries the total supply of an asset.
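A reader for the application-query form might look like the following sketch (function and variable names are hypothetical and not defined by this standard; the `algorandaddress` companion parameter for `local` queries is base32 and would need separate handling, omitted here):

```python
import base64
from urllib.parse import parse_qs, urlsplit


def parse_app_query(uri: str) -> tuple[int, dict[str, list[bytes]]]:
    """Extract the application ID and decoded base64url storage keys."""
    parts = urlsplit(uri)
    assert parts.scheme.lower() == "algorand" and parts.netloc == "app"
    app_id = int(parts.path.lstrip("/"))
    decoded = {}
    for key, values in parse_qs(parts.query).items():
        # Re-add any stripped base64url padding before decoding.
        decoded[key] = [
            base64.urlsafe_b64decode(v + "=" * (-len(v) % 4)) for v in values
        ]
    return app_id, decoded


app_id, keys = parse_app_query("algorand://app/12345?global=Z2xvYmFsX2tleQ==")
print(app_id, keys["global"])  # -> 12345 [b'global_key']
```

The decoded bytes are the raw storage keys to look up against an algod or indexer endpoint; this spec defines only the URI, not the retrieval transport.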
## Rationale
[Section titled “Rationale”](#rationale)
Previously, the Algorand URI scheme was primarily used to create transactions on the chain. This version allows using a URI scheme to directly retrieve information from the chain, specifically for applications and assets. This URI scheme provides a unified, standardized method for querying Algorand application and asset data, allowing interoperability across applications and services.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Since these URIs are intended for read-only operations, they do not alter application or asset state, mitigating many security risks. However, data retrieved from these URIs should be validated to ensure it meets user expectations and that any displayed data cannot be tampered with.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# xGov Council - Application Process
> How to run for an xGov Council seat.
## Abstract
[Section titled “Abstract”](#abstract)
The goal of this ARC is to clearly define the process for running for an xGov Council seat.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### How to apply
[Section titled “How to apply”](#how-to-apply)
In order to apply, a pull request needs to be created on the following repository: [xGov Council](https://github.com/algorandfoundation/xGov).
Candidates must explain why they are applying to become an xGov Council member, their motivation for participating in the review process, and how their involvement can contribute to the Algorand ecosystem.
* Follow the [Rules](https://github.com/algorandfoundation/xGov/blob/main/README.md) of the xGov Council Repository.
* Follow the [template form provided](https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0083/TemplateForm.md), complete all sections, and submit your application using the following file format: `Council/xgov_council-<id>.md`.
#### Header Preamble
[Section titled “Header Preamble”](#header-preamble)
The `id` field is unique and incremented for each new submission. (The id should match the file name, for `id: 1`, the related file is `xgov_council-1.md`)
The `author` field must include the candidate’s full name and their GitHub username in parentheses.
> Example: Jane Doe (@janedoe)
The `email` field must include a valid email address where the candidate can be contacted regarding the KYC (Know Your Customer) process.
The `address` field represents an Algorand wallet address. This address will be used for verification or any token distribution if applicable.
The `status` field indicates the current status of the submission:
* `Draft`: In Pull request stage but not ready to be merged.
* `Final`: In Pull request stage and ready to be merged.
* `Elected`: The candidate has been elected.
* `Not Elected`: The candidate has not been selected.
### Timeline
[Section titled “Timeline”](#timeline)
* Applications will open 4-6 weeks before the election. A call for applications will be posted on the [Algorand Forum](https://forum.algorand.org/).
### xGov Council Duties and Powers
[Section titled “xGov Council Duties and Powers”](#xgov-council-duties-and-powers)
#### Eligibility Criteria
[Section titled “Eligibility Criteria”](#eligibility-criteria)
* Any Algorand holder, including xGovs, with Algorand technical expertise and/or a strong reputation can run for the council.
* Candidates must disclose their real name, have an identified Algorand address, and undergo the KYC process with the Algorand Foundation.
#### Duties
[Section titled “Duties”](#duties)
* Review and understand the terms and conditions of the program.
* Evaluate proposals to check compliance with terms and conditions, provide general guidance, and outline benefits or issues to help kick off the proposal discussion.
* Hold public discussions about the proposals review process above.
#### Powers
[Section titled “Powers”](#powers)
* Once a proposal passes, the xGov council can block it ONLY if it doesn’t comply with the terms and conditions.
* Expel fellow council members for misconduct by a supermajority vote of at least 85%.
* Also, by a majority vote, block fellow council members’ remuneration if they are not performing their duties.
## Rationale
[Section titled “Rationale”](#rationale)
The xGov Council is a fundamental component of the xGov Program, tasked with reviewing proposals. A structured, transparent application process ensures that only qualified and committed individuals are elected to the Council.
### Governance measures related to the xGov Council
[Section titled “Governance measures related to the xGov Council”](#governance-measures-related-to-the-xgov-council)
* [Governance Period 13](https://governance.algorand.foundation/governance-period-13/period-13-voting-session-1).
* [Governance Period 14](https://governance.algorand.foundation/governance-period-14/period-14-voting-session-1).
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
### Disclaimer jurisdictions and exclusions
[Section titled “Disclaimer jurisdictions and exclusions”](#disclaimer-jurisdictions-and-exclusions)
To be eligible to apply for the xGov council, the applicant must not be a resident of, or located in, any of the following jurisdictions: Cuba, Iran, North Korea, Syria, Russia, Belarus, or the Crimea, Donetsk, and Luhansk regions of Ukraine.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# xGov status and voting power
> xGov status and voting power for the Algorand Governance
## Abstract
[Section titled “Abstract”](#abstract)
This ARC defines the Expert Governor (xGov) status and voting power in the Algorand Expert Governance.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
The notation `(x, y)` denotes a pair of elements, while `(x; y)` (with `;`) denotes the interval of real numbers between `x` and `y` (including neither `x` nor `y`).
### xGov Registry
[Section titled “xGov Registry”](#xgov-registry)
The xGov Registry is the Application that manages the Algorand Expert Governance on the Algorand blockchain.
Let
* `g` be the Genesis Hash of the Algorand blockchain;
* `R` the xGov Registry Application ID;
* `Bc` the block number at which the xGov Registry `R` was created on `g`.
The xGov Registry is created by the Algorand Foundation and is identified by the tuple `(g, R, Bc)`.
> On the Algorand MainNet the xGov Registry is created by the Algorand Foundation address `I7OP7WFSK57IFDHJA6DM5TJC2IFY4M3XSBV4R4PVOV4YWF7K57BZFUVQ5E` and identified by:
>
> * `g`: `wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=`
> * `R`: `3147789458`
> * `Bc`: `52307574`
### Governance Period
[Section titled “Governance Period”](#governance-period)
A *governance period* is identified by a pair `(Bi, Bf)` such that
* `Bi = 0 mod 1,000,000`;
* `Bf = 0 mod 1,000,000`;
* `Bf > Bi`;
* `Bf > Bc`.
The period is intended as the range of blocks `[Bi; Bf)` (`Bi` included, `Bf` excluded).
> Note that `Bi < Bc` is valid and denotes a period across the xGov Registry creation.
### xGov Status
[Section titled “xGov Status”](#xgov-status)
xGovs (Expert Governors) are decision makers in the Algorand Expert Governance, who acquire voting power by securing the network and producing blocks.
These individuals can participate in the designation and approval of proposals submitted to the Algorand Expert Governance process.
An xGov is associated with an Algorand Address (`a`) subscribing to the Algorand Expert Governance by acknowledging the xGov Registry.
Once the xGov Registry confirms the acknowledgement on block `h` for the address `a`, that address acquires xGov status and is considered an xGov.
The xGov status **MAY** be revoked on block `k` from address `a`, either by the xGov themself (unsubscribing from the xGov Registry) or by the xGov Registry rules.
The xGov status `(a, h, k)` of an address `a` **SHOULD** be persisted on the xGov Registry state.
If the xGov status `(a, h, k)` is revoked (`k ≠ 0`) it **SHOULD NOT** be deleted from xGov Registry state.
### xGov Voting Power
[Section titled “xGov Voting Power”](#xgov-voting-power)
Given a governance period `(Bi, Bf)`, an xGov `(a, h, k)` is *eligible* to acquire voting power for that period if and only if:
* `h ∈ [Bc; Bf)` (xGov status **acknowledged** before `Bf`), and
* `k = 0` or `k ≥ Bf` (xGov status **not revoked** at `Bf`), and
* `a` has proposed at least one block in `[Bi; Bf)`.
The *voting power* assigned to each xGov `(a, h, k)` is equal to the number of blocks proposed by its Algorand Address (`a`) over the governance period `[Bi; Bf)`.
> If an address `a` has acknowledged the xGov Registry at some `h ∈ [Bc; Bf)` and has proposed one or more blocks in `[Bi; Bf)`, then all such proposals in `[Bi; Bf)` contribute to its voting power, including those that occurred before `h`.
> The *eligibility* of address `a` holds for all the governance periods `(Bi, Bf)` such that `h ∈ [Bc; Bf)` and the xGov status is not revoked at `Bf` (i.e., `k = 0` or `k ≥ Bf`), with no need to reacknowledge the xGov Registry for each period.
### xGov Committee
[Section titled “xGov Committee”](#xgov-committee)
An xGov Committee is a group of *eligible* xGovs that have acquired voting power in a governance period.
Given the xGov Registry `(g, R, Bc)` and a governance period `(Bi, Bf)` as above, an *xGov Committee* for `(g, R, Bc, Bi, Bf)` is a finite set `C` of address–weight pairs `(a, v)` such that the following three conditions hold:
1. **Eligibility**: For all `(a, v)` in `C`, there exists an xGov status `(a, h, k)` such that:
* `h ∈ [Bc; Bf)`, and
* `k = 0` or `k ≥ Bf`, and
* `a` has proposed at least one block in `[Bi; Bf)`.
2. **Voting Power**: For all `(a, v)` in `C`, `v` is equal to the voting power of `a` in `[Bi; Bf)`;
3. **Uniqueness**: The addresses `a` in `C` are all distinct.
**Eligibility** at `Bf` **MUST** be evaluated on the xGov Registry state immediately after processing block `Bf-1` (i.e., the state at the end of block `Bf-1`).
An xGov Committee is defined by the tuple `(g, R, Bc, Bi, Bf, C)`.
If `C` is empty, then the xGov Committee for the governance period has no voting power.
#### xGov Committee Members
[Section titled “xGov Committee Members”](#xgov-committee-members)
The *number of xGov Committee members* `M` is the cardinality of `C`, more formally `M = |C|`.
#### xGov Committee Voting Power
[Section titled “xGov Committee Voting Power”](#xgov-committee-voting-power)
The *total voting power* of an xGov Committee `V` is the sum of votes (`v`) over all its members (`a`), more formally `V = Σ_{(a,v) ∈ C} v`.
### xGov Committee Selection Procedure
[Section titled “xGov Committee Selection Procedure”](#xgov-committee-selection-procedure)
The xGov Committee selection is repeated periodically to select new xGov Committees over time.
To build the xGov Committee `(g, R, Bc, Bi, Bf, C)`, the selection is executed with the following procedure:
1. Collect all proposed blocks in the governance period `[Bi; Bf)` to build the *potential committee* set `P` (note that not all the Block Proposers hold the xGov status).
2. For each Block Proposer address (`a`) in `P`, assign a voting power (`v`) equal to the number of blocks proposed in the governance period `[Bi; Bf)`.
3. Determine the set of xGov statuses `(a, h, k)` that are *eligible* at `Bf`, i.e. those such that:
* `h ∈ [Bc; Bf)`, and
* `k = 0` or `k ≥ Bf`.
(**OPTIONAL**) If the xGov Registry state does not persist sufficient information to determine `(a, h, k)` at `Bf` from a state snapshot, replay the xGov Registry state transitions up to `Bf` to reconstruct xGov statuses at `Bf`.
4. Collect all the *eligible* xGovs in the governance period `[Bc; Bf)` to build the *eligible xGovs* set `E(Bi, Bf)`.
5. Filter `P ∩ E` to obtain the *xGov Committee* `C`.
> The Committee for the governance period `(Bi, Bf)` is a pure function of:
>
> * The blocks history up to `Bf`;
> * The fixed registry identity `(g,R,Bc)`;
>
> And it does not depend on the time at which the Committee is elected.
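The selection procedure can be condensed into a small sketch. This is a toy model with in-memory inputs, assuming the proposer history and registry statuses have already been fetched; it is not a reference implementation:

```python
def select_committee(proposed_blocks, statuses, Bc, Bi, Bf):
    """Build the committee C as {address: voting power} per the procedure.

    proposed_blocks: {address: [block heights proposed]}  (potential set P)
    statuses:        {address: (h, k)} with k == 0 meaning "not revoked"
    """
    committee = {}
    for a, blocks in proposed_blocks.items():
        v = sum(1 for b in blocks if Bi <= b < Bf)  # step 2: voting power
        if v == 0:
            continue  # must have proposed at least one block in [Bi; Bf)
        if a not in statuses:
            continue  # block proposer without xGov status
        h, k = statuses[a]
        # step 3: eligibility evaluated against the status at Bf
        if Bc <= h < Bf and (k == 0 or k >= Bf):
            committee[a] = v  # step 5: P ∩ E with weights
    return committee


toy = select_committee(
    proposed_blocks={"ADDR1": [1_000_005, 1_000_010], "ADDR2": [1_000_020]},
    statuses={"ADDR1": (900_000, 0)},  # ADDR2 never acknowledged the registry
    Bc=500_000, Bi=1_000_000, Bf=2_000_000,
)
print(toy)  # -> {'ADDR1': 2}
```

Because the inputs are pure functions of on-chain history up to `Bf`, any party running this procedure against the same data obtains the same committee.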
### Representation
[Section titled “Representation”](#representation)
The xGov Committee **MUST** be represented with the canonical UTF-8 encoded JSON object with the following schema:
```json
{
"title": "xGov Committee",
"description": "Selected xGov Committee with voting power and validity",
"type": "object",
"properties": {
"xGovs": {
"description": "xGovs with voting power, sorted lexicographically with respect to addresses",
"type": "array",
"items": {
"type": "object",
"properties": {
"address": {
"description": "xGov address used on xGov Registry in base32",
"type": "string"
},
"votes": {
"description": "xGov voting power",
"type": "integer",
"minimum": 1
}
},
"required": ["address", "votes"]
},
"uniqueItems": true
},
"periodStart": {
"description": "First block of the Committee selection period, must ≡ 0 mod 1,000,000",
"type": "integer",
"multipleOf": 1000000
},
"periodEnd": {
"description": "Last block of the Committee selection period, must ≡ 0 mod 1,000,000 and greater than periodStart",
"type": "integer",
"multipleOf": 1000000
},
"totalMembers": {
"description": "Total number of Committee members",
"type": "integer"
},
"networkGenesisHash": {
"description": "The genesis hash of the network in base64",
"type": "string"
},
"registryId": {
"description": "xGov Registry application ID",
"type": "integer"
},
"totalVotes": {
"description": "Total number of Committee votes",
"type": "integer"
}
},
"required": ["networkGenesisHash", "periodEnd", "periodStart", "registryId", "totalMembers", "totalVotes", "xGovs"],
"additionalProperties": false
}
```
For a valid xGov Committee JSON object:
* The number of entries in the xGovs array **MUST** equal `totalMembers`.
* The sum of the `votes` fields of all xGovs entries **MUST** equal `totalVotes`.
* All address values in the xGovs array **MUST** be distinct.
The following rules aim to create a deterministic outcome of the committee file and its resulting hash.
The object keys **MUST** be sorted in lexicographical order.
The xGovs arrays **MUST** be sorted in lexicographical order with respect to the *unique* address keys.
The canonical representation of the committee object **MUST NOT** include decorative white-space (pretty printing) or a trailing newline.
An xGov Committee is identified by the following identifier:
`SHA-512/256(arc0086||SHA-512/256(xGov Committee JSON))`
The ASCII string `"arc0086"` **MUST** be encoded as the UTF-8 byte sequence `0x61 0x72 0x63 0x30 0x30 0x38 0x36`.
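The canonicalization and identifier rules above can be sketched as follows (assuming a Python build whose OpenSSL exposes `sha512_256`; helper names are hypothetical). Note that `sort_keys` handles the object-key ordering, while the producer must still sort the `xGovs` array by address:

```python
import hashlib
import json


def canonical_committee_bytes(committee: dict) -> bytes:
    """Sorted keys, no decorative whitespace, no trailing newline, UTF-8."""
    return json.dumps(
        committee, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")


def committee_id(committee: dict) -> str:
    """SHA-512/256("arc0086" || SHA-512/256(canonical committee JSON))."""
    inner = hashlib.new(
        "sha512_256", canonical_committee_bytes(committee)
    ).digest()
    return hashlib.new("sha512_256", b"arc0086" + inner).hexdigest()


sample = {
    "networkGenesisHash": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    "periodStart": 1_000_000, "periodEnd": 2_000_000,
    "registryId": 3147789458, "totalMembers": 0, "totalVotes": 0, "xGovs": [],
}
print(committee_id(sample))  # deterministic 64-hex-char identifier
```

Any verifier that recomputes the committee and serializes it with the same rules obtains byte-identical JSON and therefore the same identifier.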
### Trust Model
[Section titled “Trust Model”](#trust-model)
The Algorand Foundation is responsible for executing the Committee selection algorithm described above and publishing the resulting Committee ID on the xGov Registry.
The correctness of the process is auditable post-facto via:
* The block proposers’ history (on-chain)
* The xGov Registry history and state (on-chain)
* The published Committee JSON (hash verifiable)
Any actor can recompute and verify the selected committee independently from on-chain data.
Clients **SHOULD** use a trusted provider for both the block proposer history and the xGov Registry state.
## Rationale
[Section titled “Rationale”](#rationale)
Given the shift of the Algorand protocol towards consensus incentivization, the xGov process could be an additional way to push consensus participation.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
Recomputing the xGov Committee requires access to block proposer history for the entire governance period `[Bi; Bf)` and to the xGov Registry state.
Implementations **MUST** ensure that this historical data remains available (for example, via archival nodes or indexer services), or document any assumptions about third-party infrastructure.
Clients **SHOULD** notify the Algorand Foundation if:
* The xGov Committee for period `[Bi; Bf)` is not published by the Algorand Foundation within `10,000` blocks of the end of the period.
* A published Committee ID does not match any recomputed xGov Committee using the agreed `(g, R, Bc, Bi, Bf, C)`.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Key Name Specification
> A system for addressable values
## Abstract
[Section titled “Abstract”](#abstract)
Adopt a standard key name specification for complex data.
This defines key names that can be used to represent JSON, blobs, or other structures that do not fit neatly into state storage.
## Motivation
[Section titled “Motivation”](#motivation)
This pattern has emerged over time as a way to circumvent constraints with state storage. This seeks to codify the practice into a shared definition which can be leveraged as a primitive in the ecosystem.
This greatly simplifies the cross-cutting concerns when integrating with complex structures by directly addressing values on-chain.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
> All bullet points are in reference to Key Names
* **SHOULD** be prefixed with `o_` (for discovery/indexing)
* **MUST** separate nested object keys with `.`
* **MUST** index collections with `[N]`
* **SHALL** be escaped by starting with `${` and ending in `}`
## Rationale
[Section titled “Rationale”](#rationale)
Multiple variants of this pattern create downstream cycles which could be avoided.
### JSON/Objects
[Section titled “JSON/Objects”](#jsonobjects)
Given the following object:
```json
{
"alice": "APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE",
"bob": "BX2RWWE77PA7JNNIPWUBQYX44LDHDE6EBRFEPLJUOMBNLT4ATQ3SA7UGEQ",
"metadata": {
"rp": "algorand.co"
}
}
```
Represents the following Key/Value pairs:
| key | value |
| -------------- | ---------------------------------------------------------- |
| o\_alice | APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE |
| o\_bob | BX2RWWE77PA7JNNIPWUBQYX44LDHDE6EBRFEPLJUOMBNLT4ATQ3SA7UGEQ |
| o\_metadata.rp | algorand.co |
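A minimal flattening sketch in TypeScript (the helper `flattenKeys` is illustrative, not part of this specification; it covers the prefix, nesting, and collection rules, leaving the `${...}` escaping rule aside):

```typescript
// Hypothetical helper: flatten an object into spec-style key names,
// i.e. `o_` prefix, `.` for nested keys, `[N]` for collection indices.
function flattenKeys(obj: unknown): Record<string, string> {
  const out: Record<string, string> = {};
  const walk = (value: unknown, path: string): void => {
    if (Array.isArray(value)) {
      value.forEach((item, i) => walk(item, `${path}[${i}]`)); // index collections with [N]
    } else if (value !== null && typeof value === "object") {
      for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
        walk(v, path === "" ? k : `${path}.${k}`); // separate nested keys with .
      }
    } else {
      out[`o_${path}`] = String(value); // prefix with o_ for discovery/indexing
    }
  };
  walk(obj, "");
  return out;
}
```

Applied to the object above, this yields exactly the Key/Value pairs in the table.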
### Blob/File
[Section titled “Blob/File”](#blobfile)
Assuming the blob is larger than the state storage limit, chunking is required and can be represented as an object:
```json
{
"index": 2,
"mime": "text/plain",
"blobs": [
"...",
"..."
]
}
```
Would produce the following keys
| key | value |
| ------------ | ---------- |
| o\_index | 2 |
| o\_mime | text/plain |
| o\_blobs\[0] | … |
| o\_blobs\[1] | … |
This is only illustrative of the value size constraints; a dedicated specification would be more robust for bespoke Objects. This is out of scope for this key name specification.
### Templatization
[Section titled “Templatization”](#templatization)
Assuming the key names are longer than 64 bytes, mapping of the names to values is required.
```json
{
"this is a really long key that for some reason is extra long even though it probably doesn't need to be this long but idk maybe someone has a key this long": "data for super long key"
}
```
Would produce the following keys
| key | value |
| ------------------- | ----------------------- |
| o\_${path.to.value} | data for super long key |
This is only illustrative of the key size constraints; a dedicated specification would be more robust for applying templates. This is out of scope for this key name specification.
### Encoding/[ARC-4](/arc-standards/arc-0004) Containers
[Section titled “Encoding/ARC-4 Containers”](#encodingarc-4-containers)
Assuming the values are encoded, further processing is required with knowledge of the types
```json
{
"APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE": [0,1,2,3,...]
}
```
Given the value type
```typescript
class PackedValue extends Struct<{
a: uint64
b: uint64
}> {}
```
Would be represented as the following object
```json
{
"APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE": {
"a": 1234,
"b": 1234
}
}
```
And would produce the following keys
| key | value |
| ------------------------------------------------------------- | ----- |
| o\_APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE | bytes |
This is only illustrative of the current encoding practices; a mapping of ARC-4 Containers to key paths could be done at a future date. This is out of scope for this key name specification.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
All backwards compatibility must be done with an Adapter.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
* See [StoreKit](https://storekit.io)
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
* This does not account for private data
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ASA Metadata Registry
> Singleton Application providing ASA metadata via Algod API or the AVM
## Abstract
[Section titled “Abstract”](#abstract)
This ARC defines the interface and the implementation of a singleton Application that provides Algorand Standard Assets metadata through the Algod API or the AVM.
## Motivation
[Section titled “Motivation”](#motivation)
Algorand Standard Assets (ASA) lack a dedicated metadata field on the Algorand ledger for storing additional asset information.
Although it’s generally not advisable to use Algorand as a distributed storage system for data that could easily reside elsewhere, the absence of a native metadata store on the ledger has led the ecosystem to adopt less-than-ideal solutions for discovering and fetching off-chain asset data, involving the usage of an Indexer or external infrastructure (such as IPFS), or hacking on the ASA RBAC roles to get asset metadata mutability.
While storing large data, such as images, off-chain is a practical (and recommended) approach, smaller, more pertinent data should not incur the expenses, availability challenges, and latency typically associated with external infrastructure.
This ARC establishes a standardized URI within the ASA URL field to solve this simple use case: *directly retrieving ASA metadata using the Algod API or the AVM*.
## Specification
[Section titled “Specification”](#specification)
The keywords “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
The data types (like `uint64`, `byte[]`, etc.) in this document are to be interpreted as specified in [ARC-4](/arc-standards/arc-0004#types).
> Notes like this are non-normative.
### ASA Metadata Registry
[Section titled “ASA Metadata Registry”](#asa-metadata-registry)
The ASA Metadata Registry is an *immutable singleton* Application that stores *mutable* or *immutable* Asset Metadata.
The trusted deployments of ASA Metadata Registry are:
| NETWORK | GENESIS HASH (`base64`) | APP ID | CREATOR ADDRESS |
| :------- | :--------------------------------------------: | :---------: | :----------------------------------------------------------- |
| Main Net | `wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=` | `TBD` | `XODGWLOMKUPTGL3ZV53H3GZZWMCTJVQ5B2BZICFD3STSLA2LPSH6V6RW3I` |
| Test Net | `SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=` | `753324084` | `QYK5DXJ27Y7WIWUJMP3FFOTEU56L4KTRP4CY2GAKRXZHHKLNWV6M7JLYJM` |
> Refer to the [AppSpec section](#arc-56-appspec) for the detailed [ARC-56](/arc-standards/arc-0056) Application Specification of the singleton reference implementation.
The initial Minimum Balance Requirement (MBR) for the ASA Metadata Registry Application Account **SHOULD** be provided *before* enabling the creation of any Asset Metadata.
Once deployed, the ASA Metadata Registry **MUST NOT** be updated.
#### Asset Metadata Box
[Section titled “Asset Metadata Box”](#asset-metadata-box)
The ASA Metadata, along with some ancillary information, are stored in a dedicated Box of the ASA Metadata Registry, called *Asset Metadata Box*.
There **MUST** be at most one Asset Metadata Box per ASA.
The Asset Metadata Box Name **MUST** be equal to the raw 8-byte big-endian encoding of the *Asset ID* (`uint64`) (`ASSET_METADATA_BOX_KEY_SIZE = 8` bytes).
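The box name derivation can be sketched as follows (the helper name is ours, not part of the spec):

```typescript
// Sketch: derive the Asset Metadata Box Name from an Asset ID
// as the raw 8-byte big-endian encoding of the uint64.
function assetMetadataBoxName(assetId: bigint): Uint8Array {
  const name = new Uint8Array(8); // ASSET_METADATA_BOX_KEY_SIZE = 8
  new DataView(name.buffer).setBigUint64(0, assetId, false); // false = big-endian
  return name;
}
```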
The Asset Metadata Box Value **MUST** be defined as follows:
| FIELD | SCOPE | IN METADATA HASH | TYPE | BYTE OFFSET | BYTE SIZE |
| :-------------------------------------------- | :----- | :--------------: | :--------: | :---------: | :-----------------------: |
| [Metadata Identifiers](#metadata-identifiers) | Header | Yes | `byte` | `0` | `1` |
| [Reversible Flags](#reversible-flags) | Header | Yes | `byte` | `1` | `1` |
| [Irreversible Flags](#irreversible-flags) | Header | Yes | `byte` | `2` | `1` |
| [Metadata Hash](#metadata-hash) | Header | No (Recursive) | `byte[32]` | `3` | `32` |
| [Last Modified Round](#last-modified-round) | Header | No | `uint64` | `35` | `8` |
| [Deprecated By](#deprecated-by) | Header | No | `uint64` | `43` | `8` |
| [Metadata](#metadata) | Body | Yes | `byte[]` | `51` | up to `MAX_METADATA_SIZE` |
> See the [Metadata section](#metadata) for more details about the Metadata encoding and size limits.
#### Metadata Header
[Section titled “Metadata Header”](#metadata-header)
The Metadata Header is a byte-array of fixed length (`HEADER_SIZE`), encoding ancillary attributes of the Asset Metadata.
The `HEADER_SIZE` (`uint16`) is a parameter of the ASA Metadata Registry that is equal to the sum of the Header fields byte sizes (`51` bytes).
The maximum `HEADER_SIZE` depends on:
* The maximum AVM `log` output size `MAX_LOG_SIZE` (`1024` bytes);
* The [ARC-4](/arc-standards/arc-0004) return prefix (`151f7c75`) size `ARC4_RETURN_PREFIX_SIZE` (`4` bytes);
Therefore, `HEADER_SIZE ≤ MAX_LOG_SIZE - ARC4_RETURN_PREFIX_SIZE = 1020` bytes.
##### Metadata Identifiers
[Section titled “Metadata Identifiers”](#metadata-identifiers)
The Metadata Identifiers (`byte`) are a set of boolean switches set by the ASA Metadata Registry.
The Metadata Identifiers are defined as follows:
| BIT | DESCRIPTION | DEFAULT | STATE TRANSITION |
| :---: | :-------------------------- | :-----: | :--------------- |
| `LSB` | Not used | - | - |
| `1` | Not used | - | - |
| `2` | Not used | - | - |
| `3` | Not used | - | - |
| `4` | Not used | - | - |
| `5` | Not used | - | - |
| `6` | Not used | - | - |
| `MSB` | [Short Metadata](#metadata) | `False` | Two-ways |
The `MSB` is the leftmost bit in the byte stored in Asset Metadata Box.
The Metadata Identifiers **SHALL NOT** be updated if the Metadata is [*immutable*](#metadata-immutability).
###### Short Metadata
[Section titled “Short Metadata”](#short-metadata)
The Metadata **MAY** be identified as *short* on creation or after, by setting the `MSB` in the Metadata Identifier to `True`.
The *short* Metadata identifier is derived from the `metadata_size`. It is set to `True` if and only if `metadata_size ≤ SHORT_METADATA_SIZE`, and `False` otherwise. Its value **MAY** change on update.
Clients **MUST NOT** assume the shortness identifier persists across updates, since the Metadata size is not guaranteed to be constant (if not [*immutable*](#metadata-immutability)).
> If the Metadata is identified as *short*, clients are aware that all AVM opcodes can operate directly on the whole Metadata, example: decoding (`json_ref`, `base64_decode`), cryptography (`sha256`, `keccak256`, `sha512_256`, `sha3_256`), byte manipulations, etc.
> For further details on identification rules, refer to the [Metadata section](#metadata).
##### Metadata Flags
[Section titled “Metadata Flags”](#metadata-flags)
The Metadata Flags (**reversible** and **irreversible**) are two *distinct* sets of boolean switches set by the ASA Manager Address.
* **Reversible Flags**: are *two-ways* switches, they can be set (to `True`) or unset (to `False`). For further details, see the [Reversible Flags section](#reversible-flags).
* **Irreversible Flags**: are *one-way* switches, they can only be set (to `True`). For further details, see the [Irreversible Flags section](#irreversible-flags).
> Metadata Flags can be used for bitwise operations with a bitmask.
The Metadata Flags **MAY** be set by the ASA Manager Address **on creation** or **later**.
The Metadata Flags **SHALL NOT** be updated if the Metadata is [*immutable*](#metadata-immutability).
###### Reversible Flags
[Section titled “Reversible Flags”](#reversible-flags)
The Reversible Flags (`byte`) are defined as follows:
| BIT | DESCRIPTION | DEFAULT | SET TIME |
| :---: | :--------------------------------------------------- | :-----: | :------- |
| `LSB` | [ARC-20](/arc-standards/arc-0020) Smart ASA | `False` | Any |
| `1` | [ARC-62](/arc-standards/arc-0062) Circulating Supply | `False` | Any |
| `2` | Native Token Transfers (NTT) supported | `False` | Any |
| `3` | Custom, should be reserved for future ARCs | `False` | Any |
| `4` | Custom, should be reserved for future ARCs | `False` | Any |
| `5` | Custom, should be reserved for future ARCs | `False` | Any |
| `6` | Custom, should be reserved for future ARCs | `False` | Any |
| `MSB` | Custom, should be reserved for future ARCs | `False` | Any |
The `MSB` is the leftmost bit in the byte stored in Asset Metadata Box.
The bits `3 ... MSB` are reserved for future ARCs (default `False` if not used).
An ASA **MAY** be declared to be an [ARC-20](/arc-standards/arc-0020) Smart ASA on creation or after, setting the `LSB` in the Reversible Flags to `True`.
If the ASA is declared to be an [ARC-20](/arc-standards/arc-0020) Smart ASA:
* The ASA **MUST** conform with the [ARC-3](#arc-3-compliance) specification, and
* The Metadata **MUST** be used for the ASA Controlling Application discovery, conforming to the [ARC-20](/arc-standards/arc-0020#specifying-the-controlling-smart-contract) specification.
The [ARC-62](/arc-standards/arc-0062) ASA Circulating Supply **MAY** be enabled on creation or after, setting the bit `1` in the Reversible Flags to `True`.
If the [ARC-62](/arc-standards/arc-0062) support is *enabled*:
* The ASA **MUST** conform with the [ARC-3](#arc-3-compliance) specification, and
* The Metadata **MUST** be used for the ASA Circulating Supply Application discovery, conforming to the [ARC-62](/arc-standards/arc-0062#circulating-supply-application-discovery) specification.
An ASA **MAY** declare support for [Native Token Transfers (NTT)](https://wormhole.com/docs/products/token-transfers/native-token-transfers/overview/) on creation or after, setting bit `2` in the Reversible Flags to `True`.
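As the earlier note suggests, these flags are read with plain bitmasks; a minimal sketch (the constant names are ours):

```typescript
// Reversible Flags bit positions (LSB = bit 0 of the stored byte).
const ARC20_SMART_ASA = 1 << 0;          // LSB: ARC-20 Smart ASA
const ARC62_CIRCULATING_SUPPLY = 1 << 1; // bit 1: ARC-62 Circulating Supply
const NTT_SUPPORTED = 1 << 2;            // bit 2: Native Token Transfers

// Test a flag byte against a bitmask.
const hasFlag = (flags: number, mask: number): boolean => (flags & mask) !== 0;
```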
###### Irreversible Flags
[Section titled “Irreversible Flags”](#irreversible-flags)
The Irreversible Flags (`byte`) are defined as follows:
| BIT | DESCRIPTION | DEFAULT | SET TIME |
| :---: | :---------------------------------------------- | :-----: | :------------------- |
| `LSB` | [ARC-3](/arc-standards/arc-0003) Compliant | `False` | At metadata creation |
| `1` | [ARC-89](/arc-standards/arc-0089) Native ASA | `False` | At metadata creation |
| `2` | [ARC-54](/arc-standards/arc-0054) Burnable ASA | `False` | Any |
| `3` | Custom, should be reserved for future ARCs | `False` | Any |
| `4` | Custom, should be reserved for future ARCs | `False` | Any |
| `5` | Custom, should be reserved for future ARCs | `False` | Any |
| `6` | Custom, should be reserved for future ARCs | `False` | Any |
| `MSB` | [Metadata Immutability](#metadata-immutability) | `False` | Any |
The `MSB` is the leftmost bit in the byte stored in Asset Metadata Box.
The bits `3 ... 6` are reserved for future ARCs (default `False` if not used).
The Metadata **MAY** be declared as [ARC-3](/arc-standards/arc-0003) *compliant* on creation, setting the bit `LSB` in the Irreversible Flags to `True`.
The ASA **MAY** be declared as a *native* [ARC-89](/arc-standards/arc-0089) ASA on creation, setting the bit `1` in the Irreversible Flags to `True`.
The ASA **MAY** be declared as a *burnable* [ARC-54](/arc-standards/arc-0054) ASA on creation or after, setting the bit `2` in the Irreversible Flags to `True`, if the ASA has no Clawback Address.
###### Metadata Immutability
[Section titled “Metadata Immutability”](#metadata-immutability)
The Metadata **MAY** be declared as *immutable* on creation or after, setting the `MSB` in the Irreversible Flags to `True`.
> ⚠️ WARNING: If the ASA Manager Address is set to the Zero Address, this implies that the ASA is effectively *immutable*, regardless of the Metadata Immutability flag (`MSB`) setting.
##### Metadata Hash
[Section titled “Metadata Hash”](#metadata-hash)
The Metadata Hash (`byte[32]`) is a 256-bit hash computed as defined in the [Metadata Hash Computation section](#metadata-hash-computation).
The Metadata Hash **MUST** be set on Asset Metadata creation.
If the Asset Metadata is not [*immutable*](#metadata-immutability), the Metadata Hash **MUST** be updated on any modification of either:
* Metadata Identifiers, or
* Metadata Flags, or
* Metadata (body).
##### Last Modified Round
[Section titled “Last Modified Round”](#last-modified-round)
The Last Modified Round (`uint64`) records the block in which the Metadata Header or the Metadata was last modified (or created).
If the Asset Metadata is not [*immutable*](#metadata-immutability), the Last Modified Round **MUST** be updated on any modification of either:
* Metadata Identifiers, or
* Metadata Flags, or
* Metadata (body).
> The Last Modified Round is guaranteed to be monotonically increasing.
#### Deprecated By
[Section titled “Deprecated By”](#deprecated-by)
The Deprecated By (`uint64`) is the Application ID of the new ASA Metadata Registry version.
The Deprecated By field **MUST** be set to `0` if the ASA Metadata Registry is not deprecated.
The ASA Manager Address **MAY** migrate *mutable* metadata to a new ASA Metadata Registry version by setting the Deprecated By field to the Application ID of the new ASA Metadata Registry version.
*Immutable* metadata **MUST NOT** be migrated.
#### Metadata
[Section titled “Metadata”](#metadata)
The Metadata (`byte[]`) is a byte-array of variable length (`metadata_size`).
The `metadata_size` **MAY** be `0`, representing *empty* Metadata. In this case, the Metadata Body is the empty byte string (and `total_pages = 0`, see [Pagination](#pagination)).
The Metadata Header still exists and can be retrieved by clients.
The `MAX_METADATA_SIZE` (`uint16`) is a parameter of the ASA Metadata Registry that depends on:
* The maximum byte size of an AVM Box `MAX_BOX_SIZE` (`32768` bytes);
* The maximum Application Call arguments size `MAX_ARG_SIZE` (`2048` bytes);
* The maximum number of transactions per Group `MAX_GROUP_SIZE` (`16`);
* The `HEADER_SIZE`;
* The [ARC-4](/arc-standards/arc-0004) method selector size `ARC4_METHOD_SELECTOR_SIZE` (`4` bytes);
* The available payload for the method `arc89_create_metadata(uint64,byte,byte,uint16,byte[],pay)` (`FIRST_PAYLOAD_MAX_SIZE = MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 1 + 1 + 2 + 2 + 0) = 2030` bytes), which consumes an extra `pay` transaction in the Group (the `pay` transaction is not encoded as argument bytes, hence the `+ 0` in the formula);
* The available payload for the method `arc89_extra_payload(uint64,byte[])` (`EXTRA_PAYLOAD_MAX_SIZE = MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 2) = 2034` bytes);
> The `MAX_METADATA_SIZE` is not constrained by the first head payload of the methods `arc89_replace_metadata(...)` and `arc89_replace_metadata_larger(...)`, since their first payloads are larger than that of `arc89_create_metadata(...)`.
> Refer to the [ARC-4 Interface](#arc-4-interface) section for details about the method signatures.
Therefore, `MAX_METADATA_SIZE = FIRST_PAYLOAD_MAX_SIZE + 14 * EXTRA_PAYLOAD_MAX_SIZE = 30506` bytes.
The condition `MAX_METADATA_SIZE ≤ MAX_BOX_SIZE - HEADER_SIZE` **MUST** hold.
The `metadata_size` **MUST** satisfy the condition `metadata_size ≤ MAX_METADATA_SIZE`.
The `SHORT_METADATA_SIZE` (`uint16`) is a parameter of the ASA Metadata Registry that is equal to the maximum AVM stack element size (`4096` bytes).
If the `metadata_size ≤ SHORT_METADATA_SIZE`, it **MUST** be declared as [*short*](#short-metadata).
The Metadata **MUST NOT** be updated if [*immutable*](#metadata-immutability).
> The available payload for the method `arc89_replace_metadata_slice(uint64,uint16,byte[])` is `REPLACE_PAYLOAD_MAX_SIZE = MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 2 + 2) = 2032` bytes.
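The size arithmetic above can be restated in a few lines (constants mirror the values listed in this section):

```typescript
// Derivation of the registry size constants from the AVM limits above.
const MAX_ARG_SIZE = 2048;
const ARC4_METHOD_SELECTOR_SIZE = 4;

// arc89_create_metadata(uint64,byte,byte,uint16,byte[],pay):
// selector + uint64 + byte + byte + uint16 + byte[] length prefix;
// the pay transaction contributes no argument bytes.
const FIRST_PAYLOAD_MAX_SIZE =
  MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 1 + 1 + 2 + 2); // 2030

// arc89_extra_payload(uint64,byte[]): selector + uint64 + byte[] length prefix.
const EXTRA_PAYLOAD_MAX_SIZE =
  MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 2); // 2034

// One create call, one pay transaction, and up to 14 extra-payload calls
// fit in a 16-transaction Group.
const MAX_METADATA_SIZE =
  FIRST_PAYLOAD_MAX_SIZE + 14 * EXTRA_PAYLOAD_MAX_SIZE; // 30506
```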
##### Encoding
[Section titled “Encoding”](#encoding)
The Metadata **MUST** be a sequence of bytes representing a valid UTF-8 encoded JSON *object*, as defined in [RFC 8259](https://datatracker.ietf.org/doc/html/rfc8259), without Byte Order Mark (BOM).
If Metadata is *empty* (`metadata_size == 0`), clients **MUST** treat it as an empty JSON object for parsing purposes.
If the Metadata is a valid JSON object, it **SHOULD** conform to the [*ARC-3 JSON Metadata File Schema*](#arc-3-compliance). This is the **RECOMMENDED** schema for maximum interoperability with the ecosystem (e.g., explorers, wallets, etc.).
##### Pagination
[Section titled “Pagination”](#pagination)
A Metadata Page is a byte-array of variable length (`page_size`) that contains a portion of (or the entire) Metadata.
The `PAGE_SIZE` (`uint16`) is a parameter of the ASA Metadata Registry that depends on:
* The maximum AVM `log` output size `MAX_LOG_SIZE` (`1024` bytes);
* The [ARC-4](/arc-standards/arc-0004) return prefix (`151f7c75`) size `ARC4_RETURN_PREFIX_SIZE` (`4` bytes);
* The maximum [ARC-4](/arc-standards/arc-0004) *return type* encoding overhead, which depends on the [Get Metadata interface](#get-metadata) return type `(bool,uint64,byte[])`. ABI tuples are encoded as `head(...) || tail(...)`.
Therefore, `PAGE_SIZE = MAX_LOG_SIZE - ARC4_RETURN_PREFIX_SIZE - (1 + 8 + 2 + 2) = 1007` bytes.
The `page_size` **MUST** satisfy the condition `page_size ≤ PAGE_SIZE`.
A `page` **MUST** be identified by a 0-based index (`uint8`) from the head of the Metadata.
Page `p` covers the byte range `[p*PAGE_SIZE, min((p+1)*PAGE_SIZE, metadata_size))`. The final `page` **MAY** be shorter; all intermediate pages **SHOULD** have `page_size = PAGE_SIZE`.
> A `uint8` is enough as a `page` index since `ceil(MAX_METADATA_SIZE/PAGE_SIZE) = 31`; `0` pages are allowed (i.e., empty Metadata).
> Empty Metadata: when `total_pages == 0`, there are no Metadata Pages for hashing purposes; however, the [Get Metadata](#get-metadata) method accepts `page = 0` and returns an empty page (and `has_next_page = False`) as a convenience (any `page != 0` fails).
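The pagination rules above reduce to a small amount of arithmetic; a sketch (helper names are ours):

```typescript
const PAGE_SIZE = 1007; // MAX_LOG_SIZE - ARC4_RETURN_PREFIX_SIZE - (1 + 8 + 2 + 2)

// Number of Metadata Pages for a given metadata_size (0 pages for empty Metadata).
const totalPages = (metadataSize: number): number =>
  Math.ceil(metadataSize / PAGE_SIZE);

// Byte range [start, end) covered by the 0-based page p;
// the final page may be shorter than PAGE_SIZE.
function pageRange(p: number, metadataSize: number): [number, number] {
  return [p * PAGE_SIZE, Math.min((p + 1) * PAGE_SIZE, metadataSize)];
}
```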
##### MBR Delta
[Section titled “MBR Delta”](#mbr-delta)
The *MBR Delta* is the variation of the ASA Metadata Registry Application Account MBR due to the creation, update, or deletion of the Asset Metadata Box.
It is a tuple of two elements, encoding:
* The *sign* (`uint8`) enum:
| ENUM | VALUE | DESCRIPTION |
| :----- | :---: | :---------- |
| `NULL` | `0` | Null |
| `POS` | `1` | Positive |
| `NEG` | `255` | Negative |
* The *amount* (`uint64`) of MBR, expressed in microALGO.
The MBR Delta is calculated based on the following contextual information:
* The *existence* of the Asset Metadata Box for the ASA and,
* The relative byte sizes (`delta_size`) between a *new* Metadata (`new_metadata_size`) and the *existing* Metadata (`metadata_size`, if any).
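A sketch of the MBR Delta computation, assuming the standard Algorand box MBR formula of `2500 + 400 * (key length + value length)` microALGO (an assumption, not restated in this ARC):

```typescript
// Sign enum values from the table above.
const SIGN = { NULL: 0, POS: 1, NEG: 255 } as const;
const HEADER_SIZE = 51;  // fixed Metadata Header size
const BOX_KEY_SIZE = 8;  // ASSET_METADATA_BOX_KEY_SIZE

// Assumed box MBR formula: 2500 + 400 * (key + value) microALGO.
const boxMbr = (metadataSize: number): number =>
  2500 + 400 * (BOX_KEY_SIZE + HEADER_SIZE + metadataSize);

// existingSize = null when no Asset Metadata Box exists yet;
// newSize = null when the Box is being deleted.
function mbrDelta(
  existingSize: number | null,
  newSize: number | null,
): [number, number] {
  const before = existingSize === null ? 0 : boxMbr(existingSize);
  const after = newSize === null ? 0 : boxMbr(newSize);
  if (after === before) return [SIGN.NULL, 0];
  return after > before
    ? [SIGN.POS, after - before]
    : [SIGN.NEG, before - after];
}
```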
#### Metadata Hash Computation
[Section titled “Metadata Hash Computation”](#metadata-hash-computation)
If the Asset Metadata Hash (`am`) field of the ASA is set (i.e., not zero), then:
* It takes precedence over the hash computation, and it is copied verbatim as Metadata Hash, and
* The Asset Metadata **MUST** be flagged as [*immutable*](#metadata-immutability) at creation, and
* The ASA Metadata Registry **SHALL** validate it (according to the following specification) if the Asset Metadata is flagged as [ARC-89 Native ASA](#irreversible-flags) and not as [ARC-3 compliant](#irreversible-flags).
> Refer to the [Asset Metadata Hash](#asset-metadata-hash) section for details about the *Asset Metadata Hash* (`am`) field.
Otherwise, the Metadata Hash is computed as follows:
1. Compute the Metadata Header Hash (`hh`):
```plain
hh = SHA-512/256("arc0089/header" || Asset ID || Metadata Identifiers || Reversible Flags || Irreversible Flags || Metadata Size)
```
2. If `total_pages > 0`, compute the Page Hashes (`ph[i]`) for each Metadata Page (`i = 0 ... total_pages - 1`):
```plain
ph[i] = SHA-512/256("arc0089/page" || Asset ID || Page Index || Page Size || Page Content)
```
3. If `total_pages > 0`, compute the Asset Metadata Hash (`am`) as:
```plain
am = SHA-512/256("arc0089/am" || hh || ph[0] || ph[1] || ... || ph[total_pages - 1])
```
otherwise, if `total_pages == 0`, compute the Asset Metadata Hash (`am`) as:
```plain
am = SHA-512/256("arc0089/am" || hh)
```
Where:
* `||` denotes concatenation,
* `Asset ID` is the 8-byte encoding of the Asset ID (`uint64`), serialized in network byte order (big-endian);
* `Metadata Identifiers` is the 1-byte encoding of the [Metadata Identifiers](#metadata-identifiers) (`byte`);
* `Reversible Flags` is the 1-byte encoding of the [Reversible Flags](#reversible-flags) (`byte`);
* `Irreversible Flags` is the 1-byte encoding of the [Irreversible Flags](#irreversible-flags) (`byte`);
* `Metadata Size` is the 2-byte encoding of the Metadata Size (`uint16`), serialized in network byte order (big-endian);
* `Page Index` is the 1-byte encoding of the 0-based Metadata Page Index (`uint8`);
* `Page Size` is 2-byte encoding of the i-th Page byte size (`uint16`), serialized in network byte order (big-endian);
* `Page Content` are the *exact raw bytes* content of the i-th Metadata Page, unpadded if `len(page) < PAGE_SIZE`;
* `SHA-512/256` is defined in [NIST FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4).
Hash components **MUST NOT** be reinterpreted as a signed integer, bitset string, or multibyte integer prior to hashing.
> ⚠️ The Last Modified Round and the Deprecated By fields are **NOT** included in the Metadata Hash computation.
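The steps above can be sketched in TypeScript; this assumes Node's `crypto` module, whose `sha512-256` algorithm name maps to SHA-512/256:

```typescript
import { createHash } from "node:crypto";

const sha512_256 = (data: Buffer): Buffer =>
  createHash("sha512-256").update(data).digest();

// Big-endian fixed-width encoders for the hash components.
const be64 = (n: bigint): Buffer => { const b = Buffer.alloc(8); b.writeBigUInt64BE(n); return b; };
const be16 = (n: number): Buffer => { const b = Buffer.alloc(2); b.writeUInt16BE(n); return b; };

const PAGE_SIZE = 1007;

function metadataHash(
  assetId: bigint,
  identifiers: number,
  reversible: number,
  irreversible: number,
  metadata: Buffer,
): Buffer {
  // 1. Header hash over the domain-separated header fields.
  const hh = sha512_256(Buffer.concat([
    Buffer.from("arc0089/header"),
    be64(assetId),
    Buffer.from([identifiers, reversible, irreversible]),
    be16(metadata.length),
  ]));
  // 2. Hash each PAGE_SIZE chunk of the Metadata body (none if empty).
  const pageHashes: Buffer[] = [];
  for (let p = 0; p * PAGE_SIZE < metadata.length; p++) {
    const page = metadata.subarray(p * PAGE_SIZE, (p + 1) * PAGE_SIZE);
    pageHashes.push(sha512_256(Buffer.concat([
      Buffer.from("arc0089/page"),
      be64(assetId),
      Buffer.from([p]),      // 0-based page index, uint8
      be16(page.length),     // unpadded page size
      page,                  // exact raw page bytes
    ])));
  }
  // 3. Combine header hash and page hashes (hh alone for empty Metadata).
  return sha512_256(Buffer.concat([Buffer.from("arc0089/am"), hh, ...pageHashes]));
}
```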
### ASA Creation
[Section titled “ASA Creation”](#asa-creation)
Care has to be taken when creating an [ARC-89 Native ASA](#irreversible-flags), specifically:
* The *Asset URL* (`au`) field is defined at ASA creation time, and it is *immutable*,
* The *Asset Metadata Hash* (`am`) field is defined at ASA creation time, and it is *immutable*,
* The ASA Manager Address **MUST NOT** be set to the Zero Address on creation.
#### Asset URL
[Section titled “Asset URL”](#asset-url)
The *Asset URL* (`au`) field is used as a *partial* URI pointing to the Asset Metadata on the Algorand ledger.
The *Asset URL* (`au`) **MUST** begin with the *partial* [ARC-90](/arc-standards/arc-0090) URI:
`algorand://<netauth>/app/<app-id>?box=`
and **MAY** declare the [ARC-90 compliance fragment](./arc-0090#compliance-fragment) at the end of the *partial* URI:
`algorand://<netauth>/app/<app-id>?box=#arc<XX>+<YY>+...`
where `<XX>`, `<YY>`, etc. are the ARC numbers of the compliance fragments.
The [Native ARC-89 ASA Flag](#arc-89-native-asa-creation) **MUST** be set to `True`.
Clients **MUST** resolve the *partial* Asset URL (`au`) to a *complete* [Asset Metadata URI](#asset-metadata-uri) before using it.
> Refer to the [Asset Metadata URI section](#asset-metadata-uri) for details about the *complete* [ARC-90](/arc-standards/arc-0090) *Asset Metadata URI*.
#### Asset Metadata Hash
[Section titled “Asset Metadata Hash”](#asset-metadata-hash)
The *Asset Metadata Hash* (`am`) field is used as hash-lock invariant on ASA creation.
The ASA Creator **SHOULD** compute the *Asset Metadata Hash* (`am`) field as specified by:
* The [ARC-3 Parameters Convention](/arc-standards/arc-0003#asa-parameters-conventions), if the ASA is [ARC-3 compliant](#arc-3-compliance),
* Otherwise, the [Metadata Hash Computation](#metadata-hash-computation).
> Since the Metadata Identifiers are set by the ASA Metadata Registry on creation, the ASA Creator needs to pre-identify the ASA based on the creation parameters, specifically:
>
> * If the Metadata size at creation time is less than or equal to `SHORT_METADATA_SIZE`.
#### [ARC-3](/arc-standards/arc-0003) Compliance
[Section titled “ARC-3 Compliance”](#arc-3-compliance)
The compliance with [ARC-3](/arc-standards/arc-0003) is **OPTIONAL** but **RECOMMENDED** to maximize interoperability with the ecosystem.
If the ASA conforms to [ARC-3](/arc-standards/arc-0003), then:
* The ASA **MUST** comply with the [*ARC-3 ASA Parameters Conventions*](/arc-standards/arc-0003#asa-parameters-conventions) for the *Asset Name* (`an`) and the *Asset URL* (`au`) fields.
* It is **RECOMMENDED** to use the *Asset URL* (`au`) suffix option, in this case the *partial* [ARC-90](/arc-standards/arc-0090) URI would be: `algorand://<netauth>/app/<app-id>?box=#arc3`
* The ASA **MUST** comply with the [*ARC-3 ASA Parameters Conventions*](/arc-standards/arc-0003#asa-parameters-conventions) for the *Asset Metadata Hash* (`am`) field if the Asset Metadata are set as [*immutable*](#metadata-immutability) at creation, otherwise the *Asset Metadata Hash* (`am`) field **MUST NOT** be set (i.e., set to zero).
* The Asset Metadata **MUST** comply with the [*ARC-3 JSON Metadata File Schema*](/arc-standards/arc-0003#json-metadata-file-schema).
* The [ARC-3 Compliant Flag](#irreversible-flags) **MUST** be set to `True`.
> Refer to the [Asset Metadata URI section](#asset-metadata-uri) for details about the *complete* [ARC-90](/arc-standards/arc-0090) *Asset Metadata URI*.
> The ASA Metadata Registry does not enforce *Asset Metadata Hash* (`am`) validation for [ARC-3](/arc-standards/arc-0003) ASA.
#### Creation Process
[Section titled “Creation Process”](#creation-process)
Two **RECOMMENDED** creation processes are provided.
##### ARC-89 Native ASA Creation
[Section titled “ARC-89 Native ASA Creation”](#arc-89-native-asa-creation)
The **RECOMMENDED** creation process for an [ARC-89](/arc-standards/arc-0089) *native* ASA:
1. The ASA Creator Address defines the [Metadata Flags](#metadata-flags) and the [Metadata](#metadata),
2. The ASA Creator Address creates an ASA as follows:
* The *Asset URL* (`au`) field is set to `algorand://<netauth>/app/<app-id>?box=#arc89`,
* If the Asset Metadata is [*immutable*](#metadata-immutability), the *Asset Metadata Hash* (`am`) field is computed according to the [Metadata Hash Computation](#metadata-hash-computation) using the [Metadata Identifiers](#metadata-identifiers), the defined [Metadata Flags](#metadata-flags) and the Metadata (raw bytes),
* The ASA Manager Address is *not* set to the Zero Address.
3. The ASA Manager Address creates the Asset Metadata on the ASA Metadata Registry, using the defined Metadata Flags and Metadata.
##### ARC-89 Native ASA Creation with ARC-3 Compliant Metadata
[Section titled “ARC-89 Native ASA Creation with ARC-3 Compliant Metadata”](#arc-89-native-asa-creation-with-arc-3-compliant-metadata)
The **RECOMMENDED** creation process for an [ARC-89](/arc-standards/arc-0089) *native* ASA with [ARC-3 compliant](#irreversible-flags) Metadata is:
1. The ASA Creator Address defines the [Metadata Flags](#metadata-flags) and the [Metadata](#metadata),
2. The ASA Creator Address creates an ASA as follows:
* The *Asset URL* (`au`) field is set to `algorand://<netauth>/app/<app-id>?box=#arc3`,
* If the Asset Metadata is [*immutable*](#metadata-immutability), the *Asset Metadata Hash* (`am`) field is set according to the [*ARC-3 ASA Parameters Conventions*](/arc-standards/arc-0003#asa-parameters-conventions),
* The ASA Manager Address is *not* set to the Zero Address.
3. The ASA Manager Address creates the Asset Metadata on the ASA Metadata Registry, using the defined Metadata Flags and Metadata.
If the ASA configuration (Role-Based Access Control and destroyability) needs to be locked (by disabling the ASA Manager Address), the Asset Metadata **MUST** be created first.
> The compliance fragment for [ARC-3](/arc-standards/arc-0003) **MUST NOT** contain additional elements (i.e., `#arc3+89` is not allowed), see [ARC-90 compliance fragment](/arc-standards/arc-0090#compliance-fragment) section for details.
### Asset Metadata URI
[Section titled “Asset Metadata URI”](#asset-metadata-uri)
To get the [ARC-90](/arc-standards/arc-0090) *Asset Metadata URI*, clients **SHALL** complete the *Asset URL* by filling the `box` query parameter with the Asset Metadata Box Name, equal to the *Asset ID* (big-endian `uint64` encoded as `base64url`, URL-safe with padding):
`algorand://<netauth>/app/<app-id>?box=<box-name>#arc<XX>+<YY>+...`
where `<XX>`, `<YY>`, etc. are the ARC numbers of the compliance fragments as defined by [ARC-90](/arc-standards/arc-0090#compliance-fragment).
If the [ARC-3](/arc-standards/arc-0003) compliance fragment is used, it **MUST** be the only fragment as defined by [ARC-90](/arc-standards/arc-0090#compliance-fragment) (i.e., `#arc3` **is valid**, `#arc3+89` **is not valid**).
> The **MainNet** `netauth` is empty, therefore:
>
> * the **Asset URL** is: `algorand://app/<app-id>?box=#arc<x>+<y>+...`
>
> * the **Asset Metadata URI** is: `algorand://app/<app-id>?box=<boxparam>#arc<x>+<y>+...`
> The **TestNet** deployment uses `testnet` as `netlabel` for the `netauth` selector, therefore:
>
> * the **Asset URL** is: `algorand://net:testnet/app/<app-id>?box=#arc<x>+<y>+...`
>
> * the **Asset Metadata URI** is: `algorand://net:testnet/app/<app-id>?box=<boxparam>#arc<x>+<y>+...`
Clients **MUST** encode the Asset Metadata Box Name with URL-safe `base64url` (with padding) in [ARC-90](/arc-standards/arc-0090) URIs, and with Standard `base64` when calling Algod API endpoints with `/box?name=` query parameter.
> For further details on the `base64` Standard and URL-safe encodings refer to the [RFC 4648 sections 4 and 5](https://www.rfc-editor.org/rfc/rfc4648.html#section-4).
> The *Asset ID* (`uint64`) used as Asset Metadata Box Name (`boxparam`) in the Asset Metadata URI is encoded as `base64url` for two reasons: (1) the Box Name is assumed to be raw big-endian 8-bytes encoding a `uint64` and (2) the Algod API requires `/box?name=` query parameter to be Standard `base64` encoded, while the URI requires the URL-safe `base64url` encoding.
#### Examples
[Section titled “Examples”](#examples)
> | Asset ID (`uint64`) | 8-byte big-endian (hex) | Algod `/box?name=` (Standard `base64`) | ARC-90 `box=` (URL-safe `base64url`) |
> | ------------------: | :---------------------: | :------------------------------------: | :----------------------------------: |
> | `0` | `0000000000000000` | `AAAAAAAAAAA=` | `AAAAAAAAAAA=` |
> | `1` | `0000000000000001` | `AAAAAAAAAAE=` | `AAAAAAAAAAE=` |
> | `2^32` | `0000000100000000` | `AAAAAQAAAAA=` | `AAAAAQAAAAA=` |
> | `2^63−1` | `7fffffffffffffff` | `f/////////8=` | `f_________8=` |
>
> The *Asset Metadata URI* for the ASA `12345` would be:
>
> `algorand://<netauth>/app/<app-id>?box=AAAAAAAAMDk#arc89`
>
> * **MainNet**: `algorand://app/<app-id>?box=AAAAAAAAMDk#arc89`
> * **TestNet**: `algorand://net:testnet/app/<app-id>?box=AAAAAAAAMDk#arc89`
>
> The *Asset Metadata URI* for the [ARC-3](/arc-standards/arc-0003) ASA `12345` would be:
>
> `algorand://<netauth>/app/<app-id>?box=AAAAAAAAMDk#arc3`
>
> * **MainNet**: `algorand://app/<app-id>?box=AAAAAAAAMDk#arc3`
> * **TestNet**: `algorand://net:testnet/app/<app-id>?box=AAAAAAAAMDk#arc3`
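The two encodings in the table above can be reproduced with Python's standard library (the function names are illustrative, not part of the standard):

```python
import base64
import struct

def box_name(asset_id: int) -> bytes:
    """Asset Metadata Box Name: the Asset ID as a big-endian uint64."""
    return struct.pack(">Q", asset_id)

def box_name_for_algod(asset_id: int) -> str:
    """Standard base64, for the Algod `/box?name=` query parameter."""
    return base64.b64encode(box_name(asset_id)).decode("ascii")

def box_name_for_uri(asset_id: int) -> str:
    """URL-safe base64url (with padding), for the ARC-90 `box=` parameter."""
    return base64.urlsafe_b64encode(box_name(asset_id)).decode("ascii")

print(box_name_for_algod(2**63 - 1))  # f/////////8=
print(box_name_for_uri(2**63 - 1))    # f_________8=
```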
### Deprecation and ASA migration
[Section titled “Deprecation and ASA migration”](#deprecation-and-asa-migration)
The ASA Metadata Registry singleton application is *immutable*.
Any eventual future version **MUST** be deployed as a new Application ID.
The decision to migrate existing ASA Metadata to a new version **MUST** be made by the ASA Manager Address, by declaring the new Application ID in the Deprecated By field of the Metadata Header.
If the Deprecated By field is not `0`:
* The ASA Manager **SHOULD** leave the [Metadata](#metadata) (body) **empty** (i.e., `metadata_size = 0`),
* Clients **SHALL** point to the new [ARC-90](/arc-standards/arc-0090) Asset Metadata URI:
`algorand://<netauth>/app/<app-id>?box=<boxparam>#arc<x>+<y>+...`
and complete it as specified in the [Asset Metadata URI section](#asset-metadata-uri).
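A client can follow the Deprecated By field across registry versions until it reaches a non-deprecated one. A minimal sketch, simulating the header lookup (`arc89_get_metadata_header`) with an in-memory table of illustrative Application IDs:

```python
# Hypothetical migration chain: the Deprecated By field of a metadata
# header points to the Application ID of the next registry version
# (0 means "not deprecated"). The IDs below are purely illustrative.
DEPRECATED_BY = {
    1001: 2002,  # old registry: metadata migrated to app 2002
    2002: 0,     # current registry: holds the live metadata
}

def resolve_registry(app_id: int, max_hops: int = 8) -> int:
    """Return the registry Application ID currently holding the metadata."""
    for _ in range(max_hops):
        next_id = DEPRECATED_BY[app_id]
        if next_id == 0:
            return app_id
        app_id = next_id
    raise RuntimeError("migration chain too long")

print(resolve_registry(1001))  # 2002
```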
## Rationale
[Section titled “Rationale”](#rationale)
This ARC standardizes an on-chain, Algod/AVM-addressable metadata source for Algorand Standard Assets (ASAs).
The design goals are:
1. Direct retrieval without Indexer or external storage for small but important metadata,
2. Predictable costs and limits via a single-box layout and strict pagination caps,
3. Interoperability with the existing ecosystem through conditional ARC hooks,
4. Forward compatibility with future ARC standards, and
5. Precise deprecation strategy for new ASA Metadata Registry versions.
### ASA Metadata Registry Application + ARC-90 URI discovery
[Section titled “ASA Metadata Registry Application + ARC-90 URI discovery”](#asa-metadata-registry-application--arc-90-uri-discovery)
By fixing a *singleton* application per Algorand network and using a partial [ARC-90](/arc-standards/arc-0090) URI in the Asset URL (`au`) field, any client can deterministically compute the query parameter pointing to the Asset Metadata (`/box?name=` set to the big-endian *Asset ID*) and retrieve the metadata through (a) the Algod REST API (`GetApplicationBoxByName`) or (b) direct AVM calls to the ASA Metadata Registry. The standard supports two entrypoints for Metadata discovery and retrieval: the *Asset ID* (available on the Algorand ledger) or the *Asset Metadata URI* (which could be distributed on the Web or through other external channels).
> Refer to the [Usage section](#usage) for details.
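The Algod retrieval path can be sketched in Python. `REGISTRY_APP_ID` and the endpoint URL below are placeholders (the real singleton IDs per network are published by this ARC), and the `b64:` goal-arg prefix is assumed from the Algod REST conventions for box names:

```python
import base64
import struct
from urllib.parse import quote

# Placeholders: substitute the published registry ID and a real Algod endpoint.
REGISTRY_APP_ID = 123456789
ALGOD_URL = "https://mainnet-api.example.com"

def box_read_url(asset_id: int) -> str:
    # Box Name: the Asset ID as big-endian uint64, Standard base64 for Algod.
    name = base64.b64encode(struct.pack(">Q", asset_id)).decode("ascii")
    # Algod expects goal-arg encoded box names, hence the `b64:` prefix;
    # the `=` padding is percent-encoded in the query string.
    return (f"{ALGOD_URL}/v2/applications/{REGISTRY_APP_ID}"
            f"/box?name=b64:{quote(name)}")

print(box_read_url(12345))
```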
### Metadata Header/Body split
[Section titled “Metadata Header/Body split”](#metadata-headerbody-split)
A compact header (Identifiers, Flags, Hash, Last-Modified Round, Deprecated By) precedes the body (JSON). The Last-Modified Round provides a monotonic version marker so readers can detect mid-stream changes. The Deprecated By field allows the Asset Managers to migrate existing ASA Metadata to a new future version of the ASA Metadata Registry.
### Identifiers vs Flags
[Section titled “Identifiers vs Flags”](#identifiers-vs-flags)
The ASA Metadata Registry sets Identifiers (short-metadata hint) while the ASA Manager Address governs Flags (ARC-3, ARC-20, ARC-62, ARC-89, immutability). One-way transitions (e.g., immutability) are enforced on-chain. This mirrors ASA trust roles and prevents metadata rewrites after lock.
### Pagination with hard bounds
[Section titled “Pagination with hard bounds”](#pagination-with-hard-bounds)
Metadata pagination is provided for the AVM clients (Algod clients can read entire Metadata in a single request). A fixed `PAGE_SIZE` keeps each response within AVM limits. The registry guarantees `len(page) ≤ PAGE_SIZE` and supplies a `has_next` boolean. AVM clients can read paginated Metadata either *atomically* (**RECOMMENDED**), using Group Transactions of Inner Transactions, or with *sequential* Application Calls. If the *sequential* read is used, the Last-Modified Round supports streaming and parallel fetch with drift detection. A separate pagination head exposes total metadata size, page size, and total pages for preallocation and progress UIs.
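The *sequential* read with drift detection can be sketched as follows; `get_metadata` is a stand-in for the `arc89_get_metadata` readonly call, simulated here with in-memory pages:

```python
# In-memory simulation of arc89_get_metadata: returns
# (has_next, last_modified_round, page_bytes) for a 0-based page number.
PAGES = [b'{"name": "Exam', b'ple Token"}']
LAST_MODIFIED_ROUND = 4242

def get_metadata(asset_id: int, page: int):
    return (page < len(PAGES) - 1, LAST_MODIFIED_ROUND, PAGES[page])

def read_all(asset_id: int) -> bytes:
    """Sequentially read all pages, restarting on mid-stream changes."""
    page = 0
    has_next, first_round, chunk = get_metadata(asset_id, page)
    chunks = [chunk]
    while has_next:
        page += 1
        has_next, rnd, chunk = get_metadata(asset_id, page)
        if rnd != first_round:
            # Last Modified Round drifted: metadata changed mid-stream.
            return read_all(asset_id)
        chunks.append(chunk)
    return b"".join(chunks)

print(read_all(12345))  # b'{"name": "Example Token"}'
```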
### Hash-lock for immutable Metadata
[Section titled “Hash-lock for immutable Metadata”](#hash-lock-for-immutable-metadata)
When the ASA is declared *immutable* at creation, the Asset Metadata Hash (`am`) field can commit to the on-chain bytes (domain-separated SHA-512/256 over Flags and Metadata). This binds the ledger state to a wallet-verifiable hash without requiring JSON normalization.
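A sketch of such a domain-separated commitment, assuming an illustrative `b"arc89"` domain tag (the normative prefix and field layout are defined by this ARC, not by this snippet). It requires a Python build whose OpenSSL exposes `sha512_256`:

```python
import hashlib

def metadata_hash(reversible_flags: int, irreversible_flags: int,
                  metadata: bytes) -> bytes:
    """Domain-separated SHA-512/256 over Flags and Metadata (sketch)."""
    h = hashlib.new("sha512_256")
    h.update(b"arc89")                    # illustrative domain tag only
    h.update(bytes([reversible_flags]))
    h.update(bytes([irreversible_flags]))
    h.update(metadata)
    return h.digest()                     # 32 bytes, suitable for `am`

digest = metadata_hash(0x00, 0x80, b'{"name": "Example"}')
print(len(digest))  # 32
```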
### Scope and limits
[Section titled “Scope and limits”](#scope-and-limits)
The registry intentionally caps data to a single box (\~32 KiB minus header). Large artifacts (images, media) remain off-chain; their URIs (e.g., `ipfs://...`, `https://...`) live in the JSON. This strikes a balance between availability and ledger hygiene, discouraging chain-as-a-drive patterns.
### Operability
[Section titled “Operability”](#operability)
Metadata deletion returns excess MBR; third-party cleanup of metadata for destroyed ASAs is permitted to prevent abandoned state. Network-specific singleton IDs are published by the ARC.
### AVM Operations
[Section titled “AVM Operations”](#avm-operations)
The registry turns ASA metadata into first-class on-chain citizens, using the full potential of AVM opcodes ([`json_ref`](https://specs.algorand.co/avm/avm-appendix-a#json_ref) and [`base64_decode`](https://specs.algorand.co/avm/avm-appendix-a#base64_decode)). ASA metadata on the registry can be read and written programmatically on-chain, making it part of the AVM runtime (e.g., an Application can decide to pay a different amount based on some ASA metadata property).
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
Backwards compatibility for existing ASAs is possible, as long as the size of their metadata does not exceed `MAX_METADATA_SIZE`. Existing ASAs **SHOULD NOT** be flagged as [ARC-89 native ASAs](#irreversible-flags).
The ASA Metadata Registry can be used by existing ASAs as a fallback option in addition to existing URIs that require external infrastructure (e.g., Indexer, IPFS, etc.).
Since the Asset URL (`au`) field is immutable, the Asset Metadata cannot be discovered through an ASA look-up.
Existing ASAs willing to backport metadata to the ASA Metadata Registry **MUST** publish the [Asset Metadata URI](#asset-metadata-uri) as [ARC-2](/arc-standards/arc-0002) message, as follows:
* The `<dapp-name>` **MUST** be `arc89` (i.e., ARC number equal to `89`);
* The **RECOMMENDED** `<data-format>` values are [MsgPack](https://msgpack.org/) (`m`) or [JSON](https://www.json.org/json-en.html) (`j`);
* The `<data>` **MUST** specify a `uri` key whose value is equal to the [Asset Metadata URI](#asset-metadata-uri).
> **WARNING**: To preserve the existing ASA RBAC (e.g. Manager Address, Freeze Address, etc.) it is necessary to **include all the existing role addresses** in the `AssetConfig`. Not doing so would irreversibly disable the RBAC roles!
Clients discover the backport [ARC-2](/arc-standards/arc-0002) message by inspecting the ASA `AssetConfig` transaction history.
Clients **SHOULD** optimistically check ASA metadata existence on the ASA Metadata Registry first, to avoid inspecting the transaction history.
### Backporting Message Example - JSON without a version
[Section titled “Backporting Message Example - JSON without a version”](#backporting-message-example---json-without-a-version)
> The [ARC-2](/arc-standards/arc-0002) message to backport existing [ARC-3](/arc-standards/arc-0003) ASA `12345` metadata to the ASA Metadata Registry would be:
>
> ```text
> arc89:j{"uri": "algorand://<netauth>/app/<app-id>?box=AAAAAAAAMDk#arc3"}
> ```
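The backport note can be assembled programmatically; a minimal sketch, where the URI value is illustrative and the real one is completed as specified in the Asset Metadata URI section:

```python
import json

def backport_note(uri: str) -> bytes:
    """Build the ARC-2 backport message: <dapp-name>:<data-format><data>."""
    return b"arc89:j" + json.dumps({"uri": uri}).encode("utf-8")

# Illustrative URI: real app ID and fragment come from the registry deployment.
note = backport_note("algorand://app/123?box=AAAAAAAAMDk#arc3")
print(note.decode("utf-8"))
```

The resulting bytes go into the note field of the `AssetConfig` transaction that preserves the existing role addresses.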
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
### [ARC-4](/arc-standards/arc-0004) Interface
[Section titled “ARC-4 Interface”](#arc-4-interface)
```json
{
"name": "ASA Metadata Registry",
"desc": "Singleton Application providing ASA metadata via Algod API and AVM",
"methods": [
{
"name": "arc89_create_metadata",
"desc": "Create Asset Metadata for an existing ASA, restricted to the ASA Manager Address",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to create the Asset Metadata for" },
{ "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" },
{ "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags. WARNING: the LSB and bit 1 can be set only at creation time. If the MSB is True the Asset Metadata is IMMUTABLE" },
{ "type": "uint16", "name": "metadata_size", "desc": "The Metadata byte size to be created" },
{ "type": "byte[]", "name": "payload", "desc": "The Metadata payload (without Header). WARNING: Payload larger than args capacity must be provided with arc89_extra_payload calls in the Group" },
{ "type": "pay", "name": "mbr_delta_payment", "desc": "Payment of the MBR Delta amount (microALGO) for the Asset Metadata Box creation" }
],
"events": [
{
"name": "Arc89MetadataUpdated",
"desc": "Event emitted when Asset Metadata is created or updated",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" },
{ "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" },
{ "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" },
{ "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" },
{ "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" },
{ "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" }
]
}
],
"returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" }
},
{
"name": "arc89_replace_metadata",
"desc": "Replace mutable Metadata with smaller or equal size payload for an existing ASA, restricted to the ASA Manager Address",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to replace the Asset Metadata for" },
{ "type": "uint16", "name": "metadata_size", "desc": "The new Asset Metadata byte size" },
{ "type": "byte[]", "name": "payload", "desc": "The Metadata payload (without Header). WARNING: Payload larger than args capacity must be provided with arc89_extra_payload calls in the Group" }
],
"events": [
{
"name": "Arc89MetadataUpdated",
"desc": "Event emitted when Asset Metadata is created or updated",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" },
{ "type": "uint64", "name": "round", "desc": "Round of the Metadata creation or update" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" },
{ "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" },
{ "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" },
{ "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" },
{ "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" }
]
}
],
"returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" }
},
{
"name": "arc89_replace_metadata_larger",
"desc": "Replace mutable Metadata with larger size payload for an existing ASA, restricted to the ASA Manager Address",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to replace the Asset Metadata for" },
{ "type": "uint16", "name": "metadata_size", "desc": "The new Metadata byte size" },
{ "type": "byte[]", "name": "payload", "desc": "The Metadata payload (without Header). WARNING: Payload larger than args capacity must be provided with arc89_extra_payload calls in the Group" },
{ "type": "pay", "name": "mbr_delta_payment", "desc": "Payment of the MBR Delta amount (microALGO) for the larger Asset Metadata Box replace" }
],
"events": [
{
"name": "Arc89MetadataUpdated",
"desc": "Event emitted when Asset Metadata is created or updated",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" },
{ "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" },
{ "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" },
{ "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" },
{ "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" },
{ "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" }
]
}
],
"returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" }
},
{
"name": "arc89_replace_metadata_slice",
"desc": "Replace a slice of the Asset Metadata for an ASA with a payload of the same size, restricted to the ASA Manager Address",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to replace the Asset Metadata slice for" },
{ "type": "uint16", "name": "offset", "desc": "The 0-based byte offset within the Metadata (body) bytes" },
{ "type": "byte[]", "name": "payload", "desc": "The slice payload" }
],
"events": [
{
"name": "Arc89MetadataUpdated",
"desc": "Event emitted when Asset Metadata is created or updated",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" },
{ "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" },
{ "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" },
{ "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" },
{ "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" },
{ "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" }
]
}
],
"returns": { "type": "void" }
},
{
"name": "arc89_migrate_metadata",
"desc": "Migrate the Asset Metadata for an ASA to a new ASA Metadata Registry version, restricted to the ASA Manager Address",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to migrate the Asset Metadata for" },
{ "type": "uint64", "name": "new_registry_id", "desc": "The Application ID of the new ASA Metadata Registry version" }
],
"events": [
{
"name": "Arc89MetadataMigrated",
"desc": "Event emitted when Asset Metadata has been migrated to a new ASA Metadata Registry version",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" },
{ "type": "uint64", "name": "new_registry_id", "desc": "The Application ID of the new ASA Metadata Registry version" },
{ "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata migration" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata migration" }
]
}
],
"returns": { "type": "void" }
},
{
"name": "arc89_delete_metadata",
"desc": "Delete Asset Metadata for an ASA, restricted to the ASA Manager Address (if the ASA still exists)",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to delete the Asset Metadata for" }
],
"events": [
{
"name": "Arc89MetadataDeleted",
"desc": "Event emitted when Asset Metadata is deleted",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the deleted Asset Metadata" },
{ "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata delete" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata deletion" }
]
}
],
"returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" }
},
{
"name": "arc89_extra_payload",
"desc": "Concatenate extra payload to Asset Metadata head call methods (creation or replacement)",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to provide Metadata extra payload for" },
{ "type": "byte[]", "name": "payload", "desc": "The Metadata extra payload to concatenate" }
],
"returns": { "type": "void" }
},
{
"name": "arc89_set_reversible_flag",
"desc": "Set a reversible Asset Metadata Flag, restricted to the ASA Manager Address",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to set the Metadata Flag for" },
{ "type": "uint8", "name": "flag", "desc": "The reversible flag index to set" },
{ "type": "bool", "name": "value", "desc": "The flag value to set" }
],
"events": [
{
"name": "Arc89MetadataUpdated",
"desc": "Event emitted when Asset Metadata is created or updated",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" },
{ "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" },
{ "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" },
{ "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" },
{ "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" },
{ "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" }
]
}
],
"returns": { "type": "void" }
},
{
"name": "arc89_set_irreversible_flag",
"desc": "Set an irreversible Asset Metadata Flag, restricted to the ASA Manager Address",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to set the Metadata Flag for" },
{ "type": "uint8", "name": "flag", "desc": "The irreversible flag index to set. WARNING: must be in 2 ... 6" }
],
"events": [
{
"name": "Arc89MetadataUpdated",
"desc": "Event emitted when Asset Metadata is created or updated",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" },
{ "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" },
{ "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" },
{ "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" },
{ "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" },
{ "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" }
]
}
],
"returns": { "type": "void" }
},
{
"name": "arc89_set_immutable",
"desc": "Set Asset Metadata as immutable, restricted to the ASA Manager Address",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to set immutable Asset Metadata for" }
],
"events": [
{
"name": "Arc89MetadataUpdated",
"desc": "Event emitted when Asset Metadata is created or updated",
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" },
{ "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" },
{ "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" },
{ "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" },
{ "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" },
{ "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" },
{ "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" }
]
}
],
"returns": { "type": "void" }
},
{
"name": "arc89_get_metadata_registry_parameters",
"desc": "Return the ASA Metadata Registry parameters",
"readonly": true,
"args": [],
"returns": { "type": "(uint8,uint16,uint16,uint16,uint16,uint16,uint16,uint16,uint64,uint64)", "desc": "Tuple of (ASSET_METADATA_BOX_KEY_SIZE, HEADER_SIZE, MAX_METADATA_SIZE, SHORT_METADATA_SIZE, PAGE_SIZE, FIRST_PAYLOAD_MAX_SIZE, EXTRA_PAYLOAD_MAX_SIZE, REPLACE_PAYLOAD_MAX_SIZE, FLAT_MBR, BYTE_MBR)" }
},
{
"name": "arc89_get_metadata_partial_uri",
"desc": "Return the Asset Metadata ARC-90 partial URI, without compliance fragment (optional)",
"readonly": true,
"args": [],
"returns": { "type": "string", "desc": "Asset Metadata ARC-90 partial URI, without compliance fragment" }
},
{
"name": "arc89_get_metadata_mbr_delta",
"desc": "Return the Asset Metadata Box MBR Delta for an ASA, given a new Asset Metadata byte size. If the Asset Metadata Box does not exist, the creation MBR Delta is returned.",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to calculate the Asset Metadata MBR Delta for" },
{ "type": "uint16", "name": "new_metadata_size", "desc": "The new Asset Metadata byte size" }
],
"returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" }
},
{
"name": "arc89_check_metadata_exists",
"desc": "Checks whether the specified ASA exists and whether its associated Asset Metadata is available",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to check the ASA and Asset Metadata existence for" }
],
"returns": { "type": "(bool,bool)", "desc": "Tuple of (ASA exists, Asset Metadata exists)" }
},
{
"name": "arc89_is_metadata_immutable",
"desc": "Return True if the Asset Metadata for an ASA is immutable, False otherwise",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to check the Asset Metadata immutability for" }
],
"returns": { "type": "bool", "desc": "Asset Metadata for the ASA is immutable" }
},
{
"name": "arc89_is_metadata_short",
"desc": "Return True if Asset Metadata for an ASA is short (up to 4096 bytes), False otherwise",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to check the Asset Metadata size classification for" }
],
"returns": { "type": "(bool,uint64)", "desc": "Tuple of (Is Short Metadata, Metadata Last Modified Round)" }
},
{
"name": "arc89_get_metadata_header",
"desc": "Return the Asset Metadata Header for an ASA",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata Header for" }
],
"returns": { "type": "(byte,byte,byte,byte[32],uint64,uint64)", "desc": "Asset Metadata Header (Identifiers, Reversible Flags, Irreversible Flags, Hash, Last Modified Round, Deprecated By)" }
},
{
"name": "arc89_get_metadata_pagination",
"desc": "Return the Asset Metadata pagination for an ASA",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata pagination for" }
],
"returns": { "type": "(uint16,uint16,uint8)", "desc": "Tuple of (total metadata byte size, PAGE_SIZE, total number of pages)" }
},
{
"name": "arc89_get_metadata",
"desc": "Return paginated Asset Metadata (without Header) for an ASA",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata for" },
{ "type": "uint8", "name": "page", "desc": "The 0-based Metadata page number" }
],
"returns": { "type": "(bool,uint64,byte[])", "desc": "Tuple of (has next page, Metadata Last Modified Round, page content)" }
},
{
"name": "arc89_get_metadata_slice",
"desc": "Return a slice of the Asset Metadata for an ASA",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata slice for" },
{ "type": "uint16", "name": "offset", "desc": "The 0-based byte offset within the Metadata (body) bytes" },
{ "type": "uint16", "name": "size", "desc": "The slice bytes size to return" }
],
"returns": { "type": "byte[]", "desc": "Asset Metadata slice (size limited to PAGE_SIZE)" }
},
{
"name": "arc89_get_metadata_header_hash",
"desc": "Return the Metadata Header Hash for an ASA",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Metadata Header Hash for" }
],
"returns": { "type": "byte[32]", "desc": "Asset Metadata Header Hash" }
},
{
"name": "arc89_get_metadata_page_hash",
"desc": "Return the SHA512-256 of a Metadata page for an ASA",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata page hash for" },
{ "type": "uint8", "name": "page", "desc": "The 0-based Metadata page number" }
],
"returns": { "type": "byte[32]", "desc": "The SHA512-256 of the Metadata page" }
},
{
"name": "arc89_get_metadata_hash",
"desc": "Return the Metadata Hash for an ASA",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Metadata Hash for" }
],
"returns": { "type": "byte[32]", "desc": "Asset Metadata Hash" }
},
{
"name": "arc89_get_metadata_string_by_key",
"desc": "Return the UTF‑8 string value for a top‑level JSON key of type JSON String from short Metadata for an ASA; errors if the key does not exist or is not a JSON String",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the key value for" },
{ "type": "string", "name": "key", "desc": "The top‑level JSON key whose string value to fetch" }
],
"returns": { "type": "string", "desc": "The string value from valid UTF‑8 JSON Metadata (size limited to PAGE_SIZE)" }
},
{
"name": "arc89_get_metadata_uint64_by_key",
"desc": "Return the uint64 value for a top‑level JSON key of type JSON Uint64 from short Metadata for an ASA; errors if the key does not exist or is not a JSON Uint64",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the key value for" },
{ "type": "string", "name": "key", "desc": "The top‑level JSON key whose uint64 value to fetch" }
],
"returns": { "type": "uint64", "desc": "The uint64 value from valid UTF‑8 JSON Metadata" }
},
{
"name": "arc89_get_metadata_object_by_key",
"desc": "Return the UTF-8 object value for a top‑level JSON key of type JSON Object from short Metadata for an ASA; errors if the key does not exist or is not a JSON Object",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the key value for" },
{ "type": "string", "name": "key", "desc": "The top‑level JSON key whose object value to fetch" }
],
"returns": { "type": "string", "desc": "The object value from valid UTF‑8 JSON Metadata (size limited to PAGE_SIZE)" }
},
{
"name": "arc89_get_metadata_b64_bytes_by_key",
"desc": "Return the base64-decoded bytes for a top-level JSON key of type JSON String from short Metadata for an ASA; errors if the key does not exist, is not a JSON String, or is not valid base64 for the chosen encoding",
"readonly": true,
"args": [
{ "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the key value for" },
{ "type": "string", "name": "key", "desc": "The top-level JSON key whose base64 string value to fetch and decode" },
{ "type": "uint8", "name": "b64_encoding", "desc": "base64 encoding enum: 0 = URLEncoding, 1 = StdEncoding" }
],
"returns": { "type": "byte[]", "desc": "The base64-decoded bytes from valid UTF‑8 JSON Metadata (size limited to PAGE_SIZE)" }
}
]
}
```
The ASA Metadata Registry **MUST** validate [ARC-4](/arc-standards/arc-0004) method argument sizes according to their types.
> Refer to the [AppSpec section](#arc-56-appspec) for the detailed [ARC-56](/arc-standards/arc-0056) Application Specification of the singleton reference implementation.
##### Create Metadata
[Section titled “Create Metadata”](#create-metadata)
To create the Asset Metadata:
* The ASA **MUST** *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST NOT** *exist*.
If the provided `metadata_size > MAX_METADATA_SIZE` the creation **MUST** be rejected.
If the provided `metadata_size ≤ SHORT_METADATA_SIZE`, the [Short Metadata Identifier](#short-metadata) **MUST** be set to `True`.
The [Metadata](#metadata) **MUST** be initialized with the provided `payload` value (empty is allowed).
If the creation is part of a Group, the [extra payload](#extra-payload) provided by *later* transactions for the same `asset_id` in the same Group **MUST** be concatenated in order.
The creation **MUST** be rejected as soon as the cumulative staged size for the same `asset_id` in the same Group exceeds `metadata_size`.
The cumulative staged payload **MUST** be equal to the provided `metadata_size` (no truncation), otherwise the creation is rejected.
The [Reversible Flags](#reversible-flags) **MUST** be initialized with the provided `reversible_flags` value (`byte`).
The [Irreversible Flags](#irreversible-flags) **MUST** be initialized with the provided `irreversible_flags` value (`byte`).
The [Metadata Hash](#metadata-hash) **MUST** be initialized according to the [Metadata Hash Computation](#metadata-hash-computation).
The [Last Modified Round](#last-modified-round) **MUST** be initialized to the current round.
The [Deprecated By](#deprecated-by) field **MUST** be initialized to `0`.
If the ASA is declared as [ARC-3](/arc-standards/arc-0003) compliant, the *Asset Name* (`an`) or the *Asset URL* (`au`) **MUST** comply with the [*ARC-3 ASA Parameters Conventions*](/arc-standards/arc-0003#asa-parameters-conventions).
If the ASA is declared as [ARC-89 Native ASA](#arc-89-native-asa-creation), the *Asset URL* (`au`) **MUST** comply with the specified [Asset Metadata URI](#asset-metadata-uri) (no `#arc` fragment validation enforced).
If the ASA is declared as [ARC-54 Burnable ASA](#irreversible-flags), the ASA **MUST NOT** have a Clawback Address.
The [MBR Delta](#mbr-delta) *amount* of the created Asset Metadata Box **MUST** be provided contextually to the ASA Metadata Registry Address.
An `Arc89MetadataUpdated` event **MUST** be emitted.
> ⚠️ WARNING: If the MSB of the Irreversible Flags is `True` the Asset Metadata is *immutable*, for further details refer to the [Irreversible Flags section](#irreversible-flags).
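Splitting a payload larger than the head-call capacity across `arc89_extra_payload` calls in the Group can be sketched as follows; the size caps are illustrative stand-ins for the values returned by `arc89_get_metadata_registry_parameters`:

```python
# Illustrative caps only: query arc89_get_metadata_registry_parameters
# for the real FIRST_PAYLOAD_MAX_SIZE and EXTRA_PAYLOAD_MAX_SIZE.
FIRST_PAYLOAD_MAX_SIZE = 1800
EXTRA_PAYLOAD_MAX_SIZE = 2000

def chunk_payload(payload: bytes):
    """Split payload into the head-call slice plus ordered extra slices."""
    head = payload[:FIRST_PAYLOAD_MAX_SIZE]
    rest = payload[FIRST_PAYLOAD_MAX_SIZE:]
    extras = [rest[i:i + EXTRA_PAYLOAD_MAX_SIZE]
              for i in range(0, len(rest), EXTRA_PAYLOAD_MAX_SIZE)]
    # head goes in arc89_create_metadata; extras go, in order, into
    # arc89_extra_payload calls of the same Group.
    return head, extras

head, extras = chunk_payload(b"x" * 5000)
print(len(head), [len(e) for e in extras])  # 1800 [2000, 1200]
```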
##### Replace Metadata
[Section titled “Replace Metadata”](#replace-metadata)
To replace the Asset Metadata for an ASA with smaller or equal size Metadata:
* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*.
If the provided `metadata_size > MAX_METADATA_SIZE` the update **MUST** be rejected.
If the provided `metadata_size > existing_metadata_size` the update **MUST** be rejected.
If the provided `metadata_size ≤ SHORT_METADATA_SIZE`, the [Short Metadata Identifier](#short-metadata) **MUST** be set to `True`.
The [Metadata](#metadata) **MUST** be replaced with the provided `payload` value (empty is allowed).
If the replacement is part of a Group, the [extra payload](#extra-payload) provided by *later* transactions for the same `asset_id` in the same Group **MUST** be concatenated in order.
The replacement **MUST** be rejected as soon as the cumulative staged payload for the same `asset_id` in the same Group exceeds `metadata_size`.
The cumulative staged payload **MUST** be equal to the provided `metadata_size` (no truncation), otherwise the replacement is rejected.
The [Metadata Hash](#metadata-hash) **MUST** be updated according to the [Metadata Hash Computation](#metadata-hash-computation).
The [Last Modified Round](#last-modified-round) **MUST** be updated to the current round.
The [MBR Delta](#mbr-delta) *amount* of the updated Asset Metadata Box **MUST** be managed contextually:
* If *sign* is `NULL`, no MBR management is required;
* If *sign* is `NEG`, the excess of MBR amount **MUST** be returned from the ASA Metadata Registry Address to the ASA Manager Address.
An `Arc89MetadataUpdated` event **MUST** be emitted.
> MBR is returned with an Inner Transaction whose fee is externally provided.
##### Replace Metadata Larger
[Section titled “Replace Metadata Larger”](#replace-metadata-larger)
To replace the Asset Metadata for an ASA with larger size Metadata:
* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*.
If the provided `metadata_size > MAX_METADATA_SIZE` the update **MUST** be rejected.
If the provided `metadata_size ≤ existing_metadata_size` the update **MUST** be rejected.
If the provided `metadata_size ≤ SHORT_METADATA_SIZE`, the [Short Metadata Identifier](#short-metadata) **MUST** be set to `True`.
The [Metadata](#metadata) **MUST** be replaced with the provided `payload` value (empty is allowed).
If the replacement is part of a Group, the [extra payload](#extra-payload) provided by *later* transactions for the same `asset_id` in the same Group **MUST** be concatenated in order.
The replacement **MUST** be rejected as soon as the cumulative staged payload for the same `asset_id` in the same Group exceeds `metadata_size`.
The cumulative staged payload **MUST** be equal to the provided `metadata_size` (no truncation), otherwise the replacement is rejected.
The [Metadata Hash](#metadata-hash) **MUST** be updated according to the [Metadata Hash Computation](#metadata-hash-computation).
The [Last Modified Round](#last-modified-round) **MUST** be updated to the current round.
The [MBR Delta](#mbr-delta) *amount* of the updated Asset Metadata Box **MUST** be provided contextually to the ASA Metadata Registry Address.
An `Arc89MetadataUpdated` event **MUST** be emitted.
##### Replace Metadata Slice
[Section titled “Replace Metadata Slice”](#replace-metadata-slice)
To replace the Metadata slice for an ASA:
* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*, and
* The byte range specified by `offset` (`uint16`) and `payload` length **MUST NOT** exceed the `metadata_size`.
The Metadata slice **MUST** be replaced with the provided `payload` value.
The Metadata slice replacement **MUST** preserve the `metadata_size`.
The [Metadata Hash](#metadata-hash) **MUST** be updated according to the [Metadata Hash Computation](#metadata-hash-computation).
The [Last Modified Round](#last-modified-round) **MUST** be updated to the current round.
An `Arc89MetadataUpdated` event **MUST** be emitted.
> A group transaction can be used to replace a large Metadata slice atomically.
##### Migrate Metadata
[Section titled “Migrate Metadata”](#migrate-metadata)
To migrate the Asset Metadata for an ASA to a new ASA Metadata Registry version:
* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*, and
* The `new_registry_id` (`uint64`) **MUST** be different from the current ASA Metadata Registry Application ID (`uint64`).
The [Deprecated By](#deprecated-by) field **MUST** be set to the `new_registry_id` (`uint64`) value.
An `Arc89MetadataMigrated` event **MUST** be emitted.
> The migration can be performed more than once and reverted.
> ⚠️ The Deprecated By field is not included in the Metadata Hash computation, and does not affect the Last Modified Round.
##### Delete Metadata
[Section titled “Delete Metadata”](#delete-metadata)
To delete the Asset Metadata for an ASA:
* The Asset Metadata Box **MUST** *exist*, and
* If the ASA still *exists*:
* The Asset Metadata **MUST NOT** be *immutable*, and
* The authorization **MUST** be restricted to the ASA Manager Address.
> ⚠️ WARNING: Not even the ASA Manager Address can delete the *immutable* Asset Metadata of an *existing* ASA, while anyone can delete Asset Metadata if the ASA has been *destroyed*, regardless of being *immutable* or not.
The Asset Metadata Box **MUST** be deleted.
The [MBR Delta](#mbr-delta) *amount* of the deleted Asset Metadata Box **MUST** be managed contextually:
* If the ASA *exists*, it **MUST** be returned to the ASA Manager Address, otherwise
* It **MUST** be returned to the caller.
An `Arc89MetadataDeleted` event **MUST** be emitted.
> MBR is returned with an Inner Transaction whose fee is externally provided.
> ⚠️ The ASA Metadata Registry is not aware of the ASA destruction events, therefore it cannot guarantee a grace period in favor of the ASA Manager Address. The ASA Manager Address **SHOULD** group the ASA destruction and Asset Metadata deletion transactions in the same Group to avoid any race condition.
##### Extra Payload
[Section titled “Extra Payload”](#extra-payload)
To provide an extra payload to append to Asset Metadata creation or replace for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address.
The extra payload calls **MUST** appear *after* the corresponding header call (create or replace) for that same `asset_id` in the same Group (top-level or inner). Concatenation order is transaction-index order.
All extra payload calls for a given `asset_id` **MUST** be top-level if the header call is top-level, or inner if the header is inner.
> The Asset Metadata Box already exists since the extra payload call is always preceded by a header call (create or replace).
> The header call (create or replace) checks that the extra payload call is keyed to the same Asset ID to manage interleaving and idempotence on the *same* Group. Interleaving on different Group levels (top-level / inner) are **not supported**.
>
> **Example:** Creating and updating different Assets Metadata in the same Group
>
> ```plain
> [Tx1: Create Payload A, Extra Payload A1, Update Payload B, Extra Payload A2, Extra Payload B1]
> ```
>
> Would result in the following Asset Metadata Boxes:
>
> * Asset ID A: `[Header A, Create Payload A || Extra Payload A1 || Extra Payload A2]`
> * Asset ID B: `[Header B, Update Payload B || Extra Payload B1]`
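The staging rules above can be sketched as a minimal simulation (illustrative only; `apply_group` and the call-tuple shape are hypothetical, and the declared sizes in the usage example are made up for brevity):

```python
def apply_group(calls, max_metadata_size=4096):
    """Simulate ARC-89 header/extra-payload staging within one Group.

    Each call is ("create"|"replace", asset_id, payload, metadata_size)
    for a header call, or ("extra", asset_id, payload) for an extra
    payload call. Concatenation follows transaction-index order.
    """
    staged = {}  # asset_id -> {"size": declared metadata_size, "payload": bytes}
    for op, asset_id, payload, *rest in calls:
        if op in ("create", "replace"):
            size = rest[0]
            if size > max_metadata_size:
                raise ValueError("metadata_size exceeds MAX_METADATA_SIZE")
            staged[asset_id] = {"size": size, "payload": payload}
        elif op == "extra":
            # The header call for this asset_id MUST precede its extra payloads.
            staged[asset_id]["payload"] += payload
        if len(staged[asset_id]["payload"]) > staged[asset_id]["size"]:
            raise ValueError("cumulative staged payload exceeds metadata_size")
    for asset_id, entry in staged.items():
        # No truncation: the staged payload MUST equal the declared size.
        if len(entry["payload"]) != entry["size"]:
            raise ValueError(f"asset {asset_id}: staged payload != metadata_size")
    return {aid: entry["payload"] for aid, entry in staged.items()}
```

Running the Group from the example above (`[Create A, Extra A1, Update B, Extra A2, Extra B1]`) yields exactly the two concatenated box payloads listed, with interleaving resolved per `asset_id`.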
##### Set Reversible Flag
[Section titled “Set Reversible Flag”](#set-reversible-flag)
To set a *reversible* Asset Metadata Flag for an ASA:
* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*, and
* The *reversible* `flag` (`uint8`) **MUST** be in `0 ... 7`.
The reversible `flag` **MUST** be set to the provided `value` (`bool`).
The [Metadata Hash](#metadata-hash) **MUST** be updated according to the [Metadata Hash Computation](#metadata-hash-computation).
The [Last Modified Round](#last-modified-round) **MUST** be updated to the current round.
An `Arc89MetadataUpdated` event **MUST** be emitted if the call is not idempotent (i.e., the flag value changed).
##### Set Irreversible Flag
[Section titled “Set Irreversible Flag”](#set-irreversible-flag)
To set an *irreversible* Asset Metadata Flag for an ASA:
* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*, and
* The *irreversible* `flag` (`uint8`) **MUST** be in `2 ... 6`.
The irreversible `flag` **MUST** be set to `True` (idempotent).
The [Metadata Hash](#metadata-hash) **MUST** be updated according to the [Metadata Hash Computation](#metadata-hash-computation).
The [Last Modified Round](#last-modified-round) **MUST** be updated to the current round.
If the ASA is declared as [ARC-54 Burnable ASA](#irreversible-flags), the ASA **MUST NOT** have a Clawback Address.
An `Arc89MetadataUpdated` event **MUST** be emitted if the call is not idempotent (i.e., the flag value changed).
> ⚠️ WARNING: flags 0, 1 are set only at creation time, for further details refer to the [Irreversible Flags section](#irreversible-flags).
##### Set Immutable
[Section titled “Set Immutable”](#set-immutable)
To set the Asset Metadata as *immutable*:
* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*.
The Asset Metadata *immutability* flag in the [Irreversible Flags](#irreversible-flags) **MUST** be set to `True`.
The [Metadata Hash](#metadata-hash) **MUST** be updated according to the [Metadata Hash Computation](#metadata-hash-computation).
The [Last Modified Round](#last-modified-round) **MUST** be updated to the current round.
An `Arc89MetadataUpdated` event **MUST** be emitted.
> ⚠️ WARNING: Asset Metadata immutability cannot be revoked once set.
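The three flag operations above are plain bit manipulations on a single `byte`. A minimal sketch follows; the mapping of flag index `i` to bit `1 << i` (so the MSB is flag 7, the immutability bit) is an assumption for illustration, not mandated wording from this section:

```python
IMMUTABLE_FLAG = 7  # assumed: MSB of the Irreversible Flags byte

def set_reversible(flags: int, flag: int, value: bool) -> int:
    # Reversible flag index MUST be in 0..7; it can be set or cleared.
    assert 0 <= flag <= 7
    bit = 1 << flag
    return (flags | bit) if value else (flags & ~bit & 0xFF)

def set_irreversible(flags: int, flag: int) -> int:
    # Irreversible flag index MUST be in 2..6; setting is idempotent
    # (flags 0 and 1 are fixed at creation, flag 7 via Set Immutable).
    assert 2 <= flag <= 6
    return flags | (1 << flag)

def set_immutable(flags: int) -> int:
    # Once the immutability bit is set it cannot be revoked.
    return flags | (1 << IMMUTABLE_FLAG)

def is_immutable(flags: int) -> bool:
    return bool(flags & (1 << IMMUTABLE_FLAG))
```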
##### Get Metadata Registry Parameters
[Section titled “Get Metadata Registry Parameters”](#get-metadata-registry-parameters)
The method **MUST** return the ASA Metadata Registry parameters as a tuple:
* The first value is the `ASSET_METADATA_BOX_KEY_SIZE` (`uint8`),
* The second value is the `HEADER_SIZE` (`uint16`),
* The third value is the `MAX_METADATA_SIZE` (`uint16`),
* The fourth value is the `SHORT_METADATA_SIZE` (`uint16`),
* The fifth value is the `PAGE_SIZE` (`uint16`),
* The sixth value is the `FIRST_PAYLOAD_MAX_SIZE` (`uint16`),
* The seventh value is the `EXTRA_PAYLOAD_MAX_SIZE` (`uint16`),
* The eighth value is the `REPLACE_PAYLOAD_MAX_SIZE` (`uint16`),
* The ninth value is the `FLAT_MBR` (`uint64`),
* The tenth value is the `BYTE_MBR` (`uint64`).
Clients **SHOULD** use these parameter values and avoid locally computed constants.
##### Get Metadata Partial URI
[Section titled “Get Metadata Partial URI”](#get-metadata-partial-uri)
The method **MUST** return the Asset Metadata Partial URI (`string`) without the optional `#arc` compliance fragment:
`algorand:///app/?box=`
Clients **SHOULD** use this value as [Asset URL](#asset-url) and avoid locally computed constants.
##### Get Metadata MBR Delta
[Section titled “Get Metadata MBR Delta”](#get-metadata-mbr-delta)
To get the [MBR Delta](#mbr-delta) for an ASA:
The `new_metadata_size` (`uint16`) **MUST** be less than or equal to `MAX_METADATA_SIZE`.
* If the Asset Metadata Box *exists*, `flat_mbr = 0` and then:
* If the `new_metadata_size == metadata_size`, then:
* The returned *sign* **MUST** be `NULL`, and
* `delta_size = 0`.
* If the `new_metadata_size > metadata_size`, then:
* The returned *sign* **MUST** be `POS`, and
* `delta_size = new_metadata_size - metadata_size`.
* If the `new_metadata_size < metadata_size`, then:
* The returned *sign* **MUST** be `NEG`, and
* `delta_size = metadata_size - new_metadata_size`.
* If the Asset Metadata Box *does not exist*, `flat_mbr = FLAT_MBR` and then:
* The returned *sign* **MUST** be `POS`, and
* `delta_size = ASSET_METADATA_BOX_KEY_SIZE + HEADER_SIZE + new_metadata_size`.
The returned *amount* **MUST** be `flat_mbr + BYTE_MBR * delta_size`.
> The *static* MBR Delta calculation provided to the clients is based on:
>
> * `FLAT_MBR` (`uint64`), a parameter of the ASA Metadata Registry (microALGO) equal to AVM MBR for Box creation;
>
> * `BYTE_MBR` (`uint64`), a parameter of the ASA Metadata Registry (microALGO) equal to AVM MBR for byte used by the Box.
> The *dynamic* (**RECOMMENDED**) MBR Delta calculation is provided to the clients by simulating the create, update, or delete methods.
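The *static* calculation above is straightforward arithmetic. A minimal sketch, assuming the caller has already fetched the parameters (`ASSET_METADATA_BOX_KEY_SIZE`, `HEADER_SIZE`, `FLAT_MBR`, `BYTE_MBR`) via the registry parameters getter; the concrete numbers used in the tests are illustrative samples, not normative values:

```python
def metadata_mbr_delta(box_exists: bool, metadata_size: int, new_metadata_size: int,
                       key_size: int, header_size: int,
                       flat_mbr: int, byte_mbr: int):
    """Static MBR Delta per the rules above: returns (sign, amount) in microALGO."""
    if not box_exists:
        # New box: pay flat MBR plus per-byte MBR on key + header + body.
        delta_size = key_size + header_size + new_metadata_size
        return "POS", flat_mbr + byte_mbr * delta_size
    if new_metadata_size == metadata_size:
        return "NULL", 0
    if new_metadata_size > metadata_size:
        return "POS", byte_mbr * (new_metadata_size - metadata_size)
    return "NEG", byte_mbr * (metadata_size - new_metadata_size)
```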
##### Check Metadata Exists
[Section titled “Check Metadata Exists”](#check-metadata-exists)
The method **MUST** return a pair of booleans (`(bool,bool)`):
* The first value is `True` if the ASA *still exists*, `False` otherwise;
* The second value is `True` if the Asset Metadata for the ASA *exists*, `False` otherwise.
##### Is Metadata Immutable
[Section titled “Is Metadata Immutable”](#is-metadata-immutable)
To check if the Asset Metadata is [*immutable*](#metadata-immutability):
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.
The method **MUST** return `True` if the Asset Metadata for an ASA is *immutable* or the ASA Manager Address is set to the Zero Address, `False` otherwise.
##### Is Metadata Short
[Section titled “Is Metadata Short”](#is-metadata-short)
To check if the Asset Metadata is [*short*](#short-metadata):
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.
The method **MUST** return the value of the [Short Metadata](#short-metadata) identifier and the Last Modified Round (`uint64`).
##### Get Metadata Header
[Section titled “Get Metadata Header”](#get-metadata-header)
To get the Asset Metadata Header for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.
The Metadata Header **MUST** be returned as a tuple `(byte,byte,byte,byte[32],uint64,uint64)`, where:
* The first value (`byte`) is the [Metadata Identifiers](#metadata-identifiers),
* The second value (`byte`) is the [Reversible Flags](#reversible-flags),
* The third value (`byte`) is the [Irreversible Flags](#irreversible-flags),
* The fourth value (`byte[32]`) is the [Metadata Hash](#metadata-hash),
* The fifth value (`uint64`) is the [Last Modified Round](#last-modified-round),
* The sixth value (`uint64`) is the [Deprecated By](#deprecated-by) field.
##### Get Metadata Pagination
[Section titled “Get Metadata Pagination”](#get-metadata-pagination)
To get the Asset Metadata pagination for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.
The pagination **MUST** be returned as a tuple `(uint16,uint16,uint8)`, where:
* The first value (`uint16`) is the Metadata *total length* (`metadata_size`, in bytes),
* The second value (`uint16`) is the `PAGE_SIZE` (in bytes, as defined in the [Metadata Pagination section](#pagination)),
* The third value (`uint8`) is the total number of Metadata pages (`total_pages`).
##### Get Metadata
[Section titled “Get Metadata”](#get-metadata)
To get the Asset Metadata for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.
Let `total_pages` be as returned by [Get Metadata Pagination](#get-metadata-pagination).
* If `total_pages > 0`, the provided 0-indexed `page` (`uint8`) **MUST** satisfy `page < total_pages`.
* If `total_pages == 0`, the provided `page` **MUST** be `0`.
The paginated Asset Metadata **MUST** be returned as a tuple `(bool,uint64,byte[])`, where:
* The first value (`bool`) is a flag indicating if the Metadata *has next page*,
* The second value (`uint64`) is the [Last Modified Round](#last-modified-round) of the Metadata,
* The third value (`byte[]`) is the content of Metadata page with length equal to `content_size` bytes. If `total_pages == 0` (i.e., `metadata_size == 0`), the implementation **MUST** return an empty `byte[]`. The empty value does **NOT** imply the existence of a Metadata Page Hash (see [Get Metadata Page Hash](#get-metadata-page-hash)).
The *has next page* flag **MUST** be `True` if, at the time of serving the request, `(page + 1) * PAGE_SIZE < metadata_size`, and `False` otherwise.
The *content* byte size **MUST NOT** exceed `PAGE_SIZE`; the implementation **MUST** ensure that `content_size ≤ PAGE_SIZE` for every response.
For every page `p` where `(p+1)*PAGE_SIZE ≤ metadata_size` at serve time, the implementation **SHOULD** return `content_size = PAGE_SIZE`. The final page **MUST** return `content_size = metadata_size − PAGE_SIZE*(total_pages−1)`.
> This invariant guarantees the read operation remains within protocol return-size limits, enables deterministic computation of total pages and *has next page*, and allows client implementations to safely preallocate buffers and parallelize fetches without risk of oversized responses.
It is **RECOMMENDED** to group `total_pages` reading in a single *atomic read* using a Group Transaction or Inner Transactions.
If the `total_pages` reading is *not atomic*, clients **MUST** verify that the [Last Modified Round](#last-modified-round) remains constant across pages; if it changes, clients **SHOULD** call the `arc89_get_metadata_pagination` method again and restart reading from page `0`. Clients **MAY** simulate the *sequential* calls to guarantee atomicity under their own round expectation.
> For further details refer to the [usage section](#usage-mode-2-avm).
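The pagination invariants above can be expressed as a short client-side sketch (`total_pages` and `page_view` are hypothetical helper names):

```python
import math

def total_pages(metadata_size: int, page_size: int) -> int:
    # Number of pages implied by the rules above (0 for empty Metadata).
    return math.ceil(metadata_size / page_size)

def page_view(page: int, metadata_size: int, page_size: int):
    """Return (content_size, has_next) for a 0-indexed page."""
    pages = total_pages(metadata_size, page_size)
    if pages == 0:
        assert page == 0, "empty Metadata only serves page 0"
        return 0, False
    assert page < pages, "page out of range"
    # Full pages return PAGE_SIZE bytes; the final page returns the remainder.
    content_size = min(page_size, metadata_size - page * page_size)
    has_next = (page + 1) * page_size < metadata_size
    return content_size, has_next
```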
##### Get Metadata Slice
[Section titled “Get Metadata Slice”](#get-metadata-slice)
To get a Metadata Slice for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The condition `size ≤ PAGE_SIZE` **MUST** hold, and
* The byte range specified with `offset` (`uint16`) and `size` (`uint16`) **MUST NOT** exceed the `metadata_size`.
The slice extracted from the Metadata **MUST** be returned.
##### Get Metadata Header Hash
[Section titled “Get Metadata Header Hash”](#get-metadata-header-hash)
To get the Metadata Header Hash (`hh`) for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.
The Metadata Header Hash (`hh`) **MUST** be returned according to the [Metadata Hash Computation](#metadata-hash-computation).
##### Get Metadata Page Hash
[Section titled “Get Metadata Page Hash”](#get-metadata-page-hash)
To get the Metadata Page Hash for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.
Let `total_pages` be as returned by [Get Metadata Pagination](#get-metadata-pagination).
* If `total_pages > 0`, the provided 0-indexed `page` (`uint8`) **MUST** satisfy `page < total_pages`.
* If `total_pages == 0`, the method **MUST** fail.
The Metadata Page Hash (`ph[page]`) **MUST** be returned according to the [Metadata Hash Computation](#metadata-hash-computation).
##### Get Metadata Hash
[Section titled “Get Metadata Hash”](#get-metadata-hash)
To get the Metadata Hash for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.
The Metadata Hash (`am`) **MUST** be returned according to the [Metadata Hash Computation](#metadata-hash-computation).
##### Get Metadata String By Key
[Section titled “Get Metadata String By Key”](#get-metadata-string-by-key)
To get a Metadata JSON String value by top-level key for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST** be [*short*](#short-metadata), and
* The key’s value length **MUST NOT** exceed `PAGE_SIZE`.
The top-level key’s value (JSON String) extracted from the JSON Metadata object **MUST** be returned (as `string`).
> ⚠️ WARNING: This getter does not provide pagination or truncation of the returned value.
> ⚠️ WARNING: The following conditions cause a *runtime error*:
>
> * The Metadata (body) is not a valid UTF-8 encoded JSON object,
> * The top-level key does not exist,
> * The top-level key’s value is not a JSON String.
##### Get Metadata Uint64 By Key
[Section titled “Get Metadata Uint64 By Key”](#get-metadata-uint64-by-key)
To get a Metadata uint64 value by top-level JSON key for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST** be [*short*](#short-metadata).
The top-level key’s value (JSON Uint64) extracted from the JSON Metadata object **MUST** be returned (as `uint64`).
> ⚠️ WARNING: The following conditions cause a *runtime error*:
>
> * The Metadata (body) is not a valid UTF-8 encoded JSON object,
> * The top-level key does not exist,
> * The top-level key’s value is not a JSON Uint64.
##### Get Metadata Object By Key
[Section titled “Get Metadata Object By Key”](#get-metadata-object-by-key)
To get a Metadata object value by top-level JSON key for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST** be [*short*](#short-metadata), and
* The key’s value length **MUST NOT** exceed `PAGE_SIZE`.
The top-level key’s value (JSON Object) extracted from the JSON Metadata object **MUST** be returned (as `string`).
> ⚠️ WARNING: This getter does not provide pagination or truncation of the returned value.
> ⚠️ WARNING: The following conditions cause a *runtime error*:
>
> * The Metadata (body) is not a valid UTF-8 encoded JSON object,
> * The top-level key does not exist,
> * The top-level key’s value is not a JSON Object.
##### Get Metadata b64 Bytes By Key
[Section titled “Get Metadata b64 Bytes By Key”](#get-metadata-b64-bytes-by-key)
To get a Metadata base64-decoded value by top-level JSON key for an ASA:
* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST** be [*short*](#short-metadata), and
* The `b64_encoding` enum (`uint8`) **MUST** be either `0` (`URLEncoding`) or `1` (`StdEncoding`), and
* The key’s base64-decoded value length **MUST NOT** exceed `PAGE_SIZE`.
The top-level key’s value (JSON String) extracted from the JSON Metadata object **MUST** be base64-decoded using the selected `b64_encoding` and returned (as `byte[]`).
> ⚠️ WARNING: This getter does not provide pagination or truncation of the returned value.
> ⚠️ WARNING: The following conditions cause a *runtime error*:
>
> * The Metadata (body) is not a valid UTF-8 encoded JSON object,
> * The top-level key does not exist,
> * The top-level key’s value is not a JSON String,
> * The top-level key’s value is not a valid base64-encoding string for the chosen encoding.
> For further details on the base64 encodings refer to the `base64_decode` [AVM opcode specifications](https://specs.algorand.co/avm/avm-appendix-a#base64_decode).
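The semantics of the String and Uint64 getters (and their runtime-error conditions) can be mirrored off-chain with a small sketch; the function names are hypothetical, and the input is the Metadata body after the Header has been stripped:

```python
import json

def _load_object(metadata_body: bytes) -> dict:
    # Runtime error if the body is not a valid UTF-8 encoded JSON object.
    obj = json.loads(metadata_body.decode("utf-8"))
    if not isinstance(obj, dict):
        raise ValueError("Metadata body is not a JSON object")
    return obj

def get_string_by_key(metadata_body: bytes, key: str, page_size: int) -> str:
    value = _load_object(metadata_body)[key]  # KeyError if the key is missing
    if not isinstance(value, str):
        raise TypeError("top-level value is not a JSON String")
    if len(value.encode("utf-8")) > page_size:
        raise ValueError("value length exceeds PAGE_SIZE")
    return value

def get_uint64_by_key(metadata_body: bytes, key: str) -> int:
    value = _load_object(metadata_body)[key]
    if isinstance(value, bool) or not isinstance(value, int) or not 0 <= value < 2**64:
        raise TypeError("top-level value is not a JSON Uint64")
    return value
```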
### [ARC-28](/arc-standards/arc-0028) Events
[Section titled “ARC-28 Events”](#arc-28-events)
| ARC-28 EVENT SIGNATURE | 4-BYTE SELECTOR (HEX) |
| -------------------------------------------------------------------- | --------------------- |
| `Arc89MetadataUpdated(uint64,uint64,uint64,byte,byte,bool,byte[32])` | `8b035084` |
| `Arc89MetadataMigrated(uint64,uint64,uint64,uint64)` | `c87023bf` |
| `Arc89MetadataDeleted(uint64,uint64,uint64)` | `bc3f20d1` |
### [ARC-56](/arc-standards/arc-0056) AppSpec
[Section titled “ARC-56 AppSpec”](#arc-56-appspec)
The ASA Metadata Registry AppSpec is published in the reference implementation [repository](https://github.com/algorandfoundation/arc89/blob/main/smart_contracts/artifacts/asa_metadata_registry/AsaMetadataRegistry.arc56.json).
### Usage
[Section titled “Usage”](#usage)
The ASA Metadata Registry has two modes of operation:
* **Algod API**: the *entire* Asset Metadata is retrieved via a single request to the Algod REST API endpoints (or via SDK wrappers);
* **AVM**: the *paginated* Asset Metadata is retrieved via *grouped* (**RECOMMENDED**) or *sequential* Application Calls (real or simulated) to the ASA Metadata Registry.
#### Usage Mode 1: Algod API
[Section titled “Usage Mode 1: Algod API”](#usage-mode-1-algod-api)
The Algod clients retrieve the Asset Metadata from two entrypoints:
1. The *Asset ID*;
2. The *Asset Metadata URI*.
> A minimal [Python SDK](https://github.com/algorandfoundation/arc89/tree/main/src/asa_metadata_registry) is provided with the reference implementation.
##### Example 1: Get [ARC-3](/arc-standards/arc-0003) Metadata from the Asset ID
[Section titled “Example 1: Get ARC-3 Metadata from the Asset ID”](#example-1-get-arc-3-metadata-from-the-asset-id)
Given the *Asset ID* `12345`, the client:
1. Calls the Algod API [GetAssetByID](https://dev.algorand.co/reference/rest-api/algod/#getassetbyid) endpoint to get the *Asset URL* field (`url`) from the response and drops the `#arc3` suffix (if present), obtaining:
`algorand:///app/?box=`;
2. Encodes the *Asset ID* as `base64url` to get the Asset Metadata Box Name (``);
3. Calls the Algod API [GetApplicationBoxByName](https://dev.algorand.co/reference/rest-api/algod/#getapplicationboxbyname) endpoint to get the content of the *Asset Metadata Box* from the response:
```shell
curl -X GET http://localhost/v2/applications//box?name= \
-H 'Accept: application/json' \
-H 'X-Algo-API-Token: API_KEY'
```
The `value` field of the response contains the Asset Metadata Box content as concatenation of the following fields:
* Metadata Header (`byte[HEADER_SIZE]`);
* Metadata Body (`byte[]`): [ARC-3](/arc-standards/arc-0003) JSON Metadata.
> Clients **MUST** strip the Metadata Header (`byte[HEADER_SIZE]`) from the Asset Metadata Box value before parsing the JSON Metadata.
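The steps above can be sketched with the Python standard library only. Assumptions to note: the box key is taken to be the Asset ID as an 8-byte big-endian integer, `header_size` is expected to come from the registry parameters getter, and the `b64:`-prefixed box-name encoding follows the algod REST convention; verify all three against the reference implementation before relying on them:

```python
import base64
import json
import urllib.parse
import urllib.request

def box_name_for_asset(asset_id: int) -> bytes:
    # Assumption: the Asset Metadata Box key is the Asset ID as an
    # 8-byte big-endian integer (base64url-encoded in the Asset URL).
    return asset_id.to_bytes(8, "big")

def strip_header(box_value_b64: str, header_size: int) -> bytes:
    # Clients MUST drop the Metadata Header before parsing the JSON body.
    return base64.b64decode(box_value_b64)[header_size:]

def fetch_metadata(algod_url: str, token: str, app_id: int,
                   asset_id: int, header_size: int) -> dict:
    # GetApplicationBoxByName accepts goal-style encoded names ("b64:...").
    name = "b64:" + base64.b64encode(box_name_for_asset(asset_id)).decode()
    url = f"{algod_url}/v2/applications/{app_id}/box?name={urllib.parse.quote(name)}"
    req = urllib.request.Request(url, headers={"X-Algo-API-Token": token})
    with urllib.request.urlopen(req) as resp:
        box = json.load(resp)
    return json.loads(strip_header(box["value"], header_size))
```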
##### Example 2: Get [ARC-3](/arc-standards/arc-0003) Metadata from Asset Metadata URI
[Section titled “Example 2: Get ARC-3 Metadata from Asset Metadata URI”](#example-2-get-arc-3-metadata-from-asset-metadata-uri)
Given the *Asset Metadata URI* `algorand:///app/?box=#arc3`, the client:
1. Calls the Algod API [GetApplicationBoxByName](https://dev.algorand.co/reference/rest-api/algod/#getapplicationboxbyname) endpoint to get the content of the *Asset Metadata Box* from the response:
```shell
curl -X GET http://localhost/v2/applications//box?name= \
-H 'Accept: application/json' \
-H 'X-Algo-API-Token: API_KEY'
```
The `value` field of the response contains the Asset Metadata Box content as concatenation of the following fields:
* Metadata Header (`byte[HEADER_SIZE]`);
* Metadata Body (`byte[]`): [ARC-3](/arc-standards/arc-0003) JSON Metadata.
> Clients **MUST** strip the Metadata Header (`byte[HEADER_SIZE]`) from the Asset Metadata Box value before parsing the JSON Metadata.
#### Usage Mode 2: AVM
[Section titled “Usage Mode 2: AVM”](#usage-mode-2-avm)
The AVM clients issue Application calls (real or simulated) to the ASA Metadata Registry in two ways:
1. (**RECOMMENDED**) Atomically, via grouped (Top-level or Inner) Application Calls;
2. Sequentially, via standalone Application Calls.
##### Example 1: Atomic read with Top-level Group
[Section titled “Example 1: Atomic read with Top-level Group”](#example-1-atomic-read-with-top-level-group)
Given the *Asset ID* `12345`, the client:
1. Call `arc89_get_metadata_pagination` with `asset_id=12345`, ASA Metadata Registry returns:
* The total Metadata byte size (`uint16`);
* The `PAGE_SIZE` (`uint16`), as defined in the [Metadata Pagination section](#pagination);
* The total Metadata pages `N` (`uint8`).
2. Check that `N ≤ MAX_TXN_PER_GROUP`.
3. Group call `N * arc89_get_metadata` with `asset_id=12345` and `page=0...N-1` (0-based).
**PROS:**
* Best UX, no delay, atomic fetch guarantees integrity and no data drift.
**CONS:**
* The fetchable `metadata_size` is capped by the `MAX_TXN_PER_GROUP` capacity for a Top-level Group (while Inner Groups can fetch up to `MAX_METADATA_SIZE`).
##### Example 2: Sequential read, while “has next” page
[Section titled “Example 2: Sequential read, while “has next” page”](#example-2-sequential-read-while-has-next-page)
Given the *Asset ID* `12345`, the client:
1. Call `arc89_get_metadata` with `asset_id=12345` and `page=0` (0-based), ASA Metadata Registry returns:
* A *has next* (`bool`) flag indicating if more pages exist;
* The [Last Modified Round](#last-modified-round) monotonic counter;
* Exactly `PAGE_SIZE` bytes of Metadata (or fewer on the last page), as defined in the [Metadata Pagination section](#pagination).
2. While *has next* page, call `arc89_get_metadata` with `asset_id=12345` and incremented `page`, verifying Last Modified Round is unchanged.
**PROS:**
* No arithmetic on the caller; just loop while *has next*.
**CONS:**
* Caller doesn’t know the total Metadata length or pages upfront.
* Callers that want progress bars have to either read page `0` first or call the separate `arc89_get_metadata_pagination` method.
**BEST FOR:**
* Wallets and explorers that stream progressively and don’t care about total Metadata length until finished.
##### Example 3: Sequential read, two-call pattern
[Section titled “Example 3: Sequential read, two-call pattern”](#example-3-sequential-read-two-call-pattern)
Given the *Asset ID* `12345`, the client:
1. Call `arc89_get_metadata_pagination` with `asset_id=12345`, ASA Metadata Registry returns:
* The total Metadata byte size (`uint16`);
* The `PAGE_SIZE` (`uint16`), as defined in the [Metadata Pagination section](#pagination);
* The total Metadata pages `N` (`uint8`).
2. Loop calls `arc89_get_metadata` with `asset_id=12345` and `page=0...N-1` (0-based), verifying [Last Modified Round](#last-modified-round) is unchanged.
**PROS:**
* Changes of `PAGE_SIZE` in the future won’t break readers;
* Improves UX (progress, preallocation).
**CONS:**
* Requires two round trips in the common case.
**BEST FOR:**
* Latency-tolerant clients and SDKs that value clarity and future-proofing.
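The two-call pattern of Example 3, including the Last Modified Round consistency check, can be sketched as follows. `get_pagination` and `get_metadata` are caller-supplied stand-ins for the `arc89_get_metadata_pagination` / `arc89_get_metadata` Application Calls (however the client issues them):

```python
def read_all_pages(get_pagination, get_metadata, asset_id: int,
                   max_retries: int = 3) -> bytes:
    """Sequential two-call read: fetch pagination, then loop over pages,
    restarting from page 0 if the Last Modified Round changes mid-read."""
    for _ in range(max_retries):
        metadata_size, page_size, pages = get_pagination(asset_id)
        chunks, last_round, dirty = [], None, False
        for page in range(max(pages, 1)):  # empty Metadata still serves page 0
            has_next, modified_round, content = get_metadata(asset_id, page)
            if last_round is not None and modified_round != last_round:
                dirty = True  # Metadata changed between pages: retry
                break
            last_round = modified_round
            chunks.append(content)
        if not dirty:
            data = b"".join(chunks)
            assert len(data) == metadata_size
            return data
    raise RuntimeError("Metadata kept changing across retries")
```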
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
The authorization to create the Asset Metadata and update and delete *mutable* Asset Metadata is granted to the ASA Manager Address to preserve the ASA trust model. The authorization is not granted to the ASA Creator Address, since this role could be performed programmatically by Applications and is not supposed to be the long-lasting maintainer of the ASA.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# URI scheme
> Consolidated specification for encoding Algorand transactions and queries as URIs.
## Abstract
[Section titled “Abstract”](#abstract)
This ARC defines a unified Algorand URI scheme that covers payment transactions, key registration, application NoOp calls, and read-only blockchain queries. It expands on earlier URI specifications to support deeplinks, QR codes, and other contexts where structured URIs communicate transaction intent or state queries.
## Motivation
[Section titled “Motivation”](#motivation)
This ARC consolidates and supersedes [ARC-26](/arc-standards/arc-0026), [ARC-78](/arc-standards/arc-0078), [ARC-79](/arc-standards/arc-0079), and [ARC-82](/arc-standards/arc-0082). Unifying their technical details avoids divergence across implementations, ensures extensions share consistent encoding rules, and provides a single reference for wallet, application, and tooling authors.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
Algorand URIs follow the general format for URIs as set forth in [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986). The path component consists of an Algorand address, and the query component provides additional parameters specific to the encoded intent.
### ABNF Overview
[Section titled “ABNF Overview”](#abnf-overview)
The productions below consolidate the syntax for all Algorand URI variants. Scheme-specific sections reference these shared rules when describing their parameters.
```abnf
; Core
algorandaddress = *base32
appid = *digit
assetid = *digit
; qchar corresponds to RFC 3986 query characters excluding "=" and "&"
; qbase64url matches the unpadded base64url alphabet from RFC 4648 section 5
qbase64url = 1*(ALPHA / DIGIT / "-" / "_")
alabel = 1*(ALPHA / DIGIT / "-" / "_" / ".")
noteparam = "note=" *qchar
xnote = "xnote=" *qchar
feeparam = "fee=" *digit
otherparam = qchar *qchar [ "=" *qchar ]
; Network authority selectors
netauth = ghlabel / netlabel
ghlabel = "gh:" 1*qbase64url
netlabel = "net:" ( "testnet" / "betanet" / alabel )
; Payment transactions (ARC-26)
paymenturn = "algorand://" [ netauth "/" ] algorandaddress [ "?" paymentparams ]
paymentparams = paymentparam *( "&" paymentparam )
paymentparam = amountparam / labelparam / noteparam / xnote / assetparam / otherparam
amountparam = "amount=" *digit
labelparam = "label=" *qchar
assetparam = "asset=" *digit
; Key registration transactions (ARC-78)
keyregurn = "algorand://" [ netauth "/" ] algorandaddress [ "?" keyregparams ]
keyregparams = keyregparam *( "&" keyregparam )
keyregparam = typekeyreg / votekeyparam / selkeyparam / sprfkeyparam / votefstparam / votelstparam / votekdparam / noteparam / xnote / feeparam / otherparam
typekeyreg = "type=keyreg"
votekeyparam = "votekey=" *qbase64url
selkeyparam = "selkey=" *qbase64url
sprfkeyparam = "sprfkey=" *qbase64url
votefstparam = "votefst=" *digit
votelstparam = "votelst=" *digit
votekdparam = "votekd=" *digit
; Application NoOp call transactions (ARC-79)
noopurn = "algorand://" [ netauth "/" ] algorandaddress [ "?" noopparams ]
noopparams = noopparam *( "&" noopparam )
noopparam = typeappl / appparam / methodparam / argparam / boxparam / assetparam / accountparam / feeparam / noteparam / xnote / otherparam
typeappl = "type=appl"
appparam = "app=" *digit
methodparam = "method=" *qchar
argparam = "arg=" *qchar
boxparam = "box=" *qbase64url
accountparam = "account=" *base32
; Application state queries (ARC-82 application mode)
appqueryurn = "algorand://" [ netauth "/" ] "app/" appid [ "?" appqueryparams ]
appqueryparams = appqueryparam *( "&" appqueryparam )
appqueryparam = boxparam / globalparam / localparam / algaddrparam / tealcodeparam / otherparam
globalparam = "global=" *qbase64url
localparam = "local=" *qbase64url
algaddrparam = "algorandaddress=" *base32
tealcodeparam = "tealcode"
; Asset metadata queries (ARC-82 asset mode)
assetqueryurn = "algorand://" [ netauth "/" ] "asset/" assetid [ "?" assetqueryparams ]
assetqueryparams = assetqueryparam *( "&" assetqueryparam )
assetqueryparam = totalparam / decimalsparam / frozenparam / unitnameparam / assetnameparam / urlparam / metadatahashparam / managerparam / reserveparam / freezeparam / clawbackparam / otherparam
totalparam = "total"
decimalsparam = "decimals"
frozenparam = "frozen"
unitnameparam = "unitname"
assetnameparam = "assetname"
urlparam = "url"
metadatahashparam = "metadatahash"
managerparam = "manager"
reserveparam = "reserve"
freezeparam = "freeze"
clawbackparam = "clawback"
```
Elements of the query component may contain characters outside the valid range. These must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence must be percent-encoded as described in RFC 3986.
Here, “qchar” corresponds to valid characters of an RFC 3986 URI query component, excluding the ”=” and ”&” characters, which this specification takes as separators.
The scheme component (“algorand:”) is case-insensitive, and implementations MUST accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys.
> Encoding Rules for qchar Values
>
> Parameters containing text or binary data (e.g. note) MUST be encoded according to [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986). Characters outside the unreserved URI set — including ”=”, ”&”, ”%”, and any non-ASCII or binary bytes — MUST first be UTF-8 encoded and then percent-encoded (%XX format).
>
> Implementations MUST NOT treat raw ”=” or ”&” inside values as literal characters, since these delimit query parameters.
```plaintext
note=foo%3Dbar%26baz ; represents "foo=bar&baz"
note=%00%FF%AA ; arbitrary binary bytes (hex 00 FF AA)
note=Donation%20for%20Event ; spaces encoded as %20
```
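The encoding rule above can be sketched with Python’s standard library; `encode_qchar` is an illustrative helper name, not part of this specification:

```python
from urllib.parse import quote

def encode_qchar(value: str) -> str:
    """Percent-encode a query value: UTF-8 encode the text, then escape
    every reserved byte, including "=" and "&", which delimit parameters."""
    return quote(value, safe="")

print(encode_qchar("foo=bar&baz"))         # foo%3Dbar%26baz
print(encode_qchar("Donation for Event"))  # Donation%20for%20Event
```

Passing `safe=""` is what forces `=` and `&` to be escaped; with the default `safe="/"` they would also be escaped, but the empty string makes the intent explicit.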
### Common URI Format
[Section titled “Common URI Format”](#common-uri-format)
#### Network Selection via Authority
[Section titled “Network Selection via Authority”](#network-selection-via-authority)
All Algorand URI variants encode the target network in the authority component instead of relying on query parameters. The authority, when present, MUST use one of the following prefixes:
* `gh:`: Authoritative selector carrying the unpadded base64url encoding (per [RFC 4648 section 5](https://www.rfc-editor.org/rfc/rfc4648.html#section-5)) of the 32-byte genesis hash. Clients MUST validate and honor this selector.
* `net:`: Advisory alias (e.g., `testnet`, `betanet`, or deployment-specific labels) that clients MAY resolve to a known genesis hash.
When the authority is absent (i.e., `algorand://` is followed immediately by the resource path), clients MUST assume the canonical Algorand network implied by legacy authority-free URIs. For avoidance of doubt, the canonical network corresponds to Algorand MainNet. To preserve backward compatibility, emitters targeting that canonical network MUST omit the authority entirely. This rearrangement moves `app` and `asset` identifiers out of the authority and into the leading path segment, so resolvers that previously looked for those tokens in the authority MUST be updated.
Because authority-free URIs remain unchanged, applications that exclusively target the canonical network continue to generate and parse the exact same strings. Only resolvers and emitters that work with alternative networks need to adopt the `gh:` or `net:` authorities introduced here.
#### Client Resolution Algorithm
[Section titled “Client Resolution Algorithm”](#client-resolution-algorithm)
* If the authority begins with `gh:`, resolve to that network and validate the hash length.
* Else if the authority begins with `net:`, map the alias to a locally known genesis hash; if the alias is unknown, treat the URI as invalid.
* Else (no authority present), assume the client’s configured canonical Algorand genesis hash; if such configuration is missing, treat the URI as invalid.
#### Network Selection Examples
[Section titled “Network Selection Examples”](#network-selection-examples)
Implicit default network:
```plaintext
algorand://asset/31566704?total
```
TestNet authoritative hash:
```plaintext
algorand://gh:SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9_cOUJOiI/app/421337?local=bG9j
```
Private network alias:
```plaintext
algorand://net:myco-devnet/asset/31566704?total
```
Conflict example (invalid alias):
```plaintext
algorand://net:unknown-net/asset/31566704?total
```
#### Migration & Compatibility
[Section titled “Migration & Compatibility”](#migration--compatibility)
* Parsers that expect `app` or `asset` in the authority for non-canonical networks will break. Implementations SHOULD accept legacy query-based selectors during a transition period but MUST emit the new authority-based form.
* When a legacy form is detected, interpret the query-based `gh`/`net` selectors with the canonical network as the default, preserving backwards compatibility.
* Authority-free canonical network URIs remain valid and identical, so tooling that only targets that network requires no changes.
* Emitters targeting the canonical network MUST omit the authority.
#### Trade-offs
[Section titled “Trade-offs”](#trade-offs)
* **Pros**: Encodes the network as part of the hierarchical identity, keeps the default authority-free form compact, and aligns with resolver or gateway architectures that are keyed by network.
* **Cons**: Requires ecosystem updates because the authority semantics change and the migration story is more involved than the query-based approach.
### Compliance Fragment
[Section titled “Compliance Fragment”](#compliance-fragment)
Implementations that emit Algorand URIs and need to declare conformance with multiple ARCs MUST encode that declaration in the URI fragment using the pattern `#arc<N>+<M>+...`, where each entry is an unpadded decimal ARC number and entries are listed in strictly ascending order. Only the first entry carries the `arc` literal; subsequent entries MUST be bare numbers separated by `+`, and no other separators or padding are allowed. For example, `#arc26+27` is valid, while `#arc26+arc27`, `#arc026+27`, and `#arc27+26` are invalid.
**As a special case, declarations that include [ARC-3](/arc-standards/arc-0003) MUST list ARC-3 as the sole entry, using the fragment `#arc3` without any additional ARC identifiers.**
Consumers of this scheme SHOULD treat fragments that do not follow this structure as non-compliant and ignore the multi-ARC declaration they attempt to convey.
#### Example — Declaring Multi-ARC Compliance
[Section titled “Example — Declaring Multi-ARC Compliance”](#example--declaring-multi-arc-compliance)
Implementations MAY use the URI fragment to indicate which ARC standards the resource conforms to.
The following illustration uses two hypothetical future ARCs. A resource that conforms to hypothetical ARC-X and ARC-Y could expose a URI such as:
`algorand://app/123456?box=AAAAAAAAAAAAA#arcX+Y`
In this case, the fragment value `arcX+Y` declares that the asset metadata conforms to both hypothetical ARC-X and ARC-Y, where `X` and `Y` are ARC numbers.
Clients interpreting this fragment SHOULD:
1. Strip the `#arc` prefix and split the remainder by the `"+"` separator.\
Example: `"arcX+Y"` → `[X, Y]`
2. Treat the resulting list as the set of supported ARC identifiers.
3. Optionally fetch or reference the corresponding ARC specification documents,\
e.g.:
* ARC-X (hypothetical)
* ARC-Y (hypothetical)
4. Reject fragments whose ARC numbers are not listed in strictly ascending order, consistent with the validity rules above (e.g. `#arc27+26` is invalid).
Implementations MAY use the following regular expression to validate fragment values while enforcing the no-leading-zero requirement and forbidding duplicate `#arc` prefixes:
```regex
^(?!.*#arc.*#arc).*#arc(?!0\d)\d+(?:\+(?!0\d)\d+)*$
```
This mechanism ensures that multiple ARC declarations in a URI fragment can be parsed, validated, and cross-referenced unambiguously.
### Transaction URIs
[Section titled “Transaction URIs”](#transaction-uris)
The base payment URI encoding provides a standardized way for applications and websites to express payment intent through deeplinks, QR codes, and similar transports. It is heavily based on Bitcoin’s [BIP-0021](https://github.com/bitcoin/bips/blob/master/bip-0021.mediawiki) so existing tooling can adapt with minimal changes. The optional URI authority selects the network (`gh:` or `net:`); when omitted, clients assume the canonical network.
The ABNF overview defines `paymenturn`, `paymentparams`, and `paymentparam`, which extend the shared productions with the payment-specific keys described below.
#### Query Keys
[Section titled “Query Keys”](#query-keys)
* label: Label for that address (e.g. name of receiver)
* address: Algorand address
* xnote: A URL-encoded notes field value that must not be modifiable by the user when displayed to users.
* note: A URL-encoded default notes field value that the user interface may optionally make editable by the user.
* amount: microAlgos or smallest unit of asset
* asset: The asset id this request refers to (if Algos, simply omit this parameter)
* (others): optional, for future extensions
#### Transfer Amount and Size
[Section titled “Transfer Amount and Size”](#transfer-amount-and-size)
!!! Note This is DIFFERENT from Bitcoin’s BIP-0021
If an amount is provided, it MUST be specified in the base unit of the asset. For example, for Algos (the Algorand native unit), the amount MUST be specified in microAlgos. Amounts MUST NOT contain commas or a period (.); they are strictly non-negative integers.
For 100 Algos, the amount needs to be 100000000. For 54.1354 Algos, the amount needs to be 54135400.
Algorand clients SHOULD display the amount in whole Algos. Where needed, microAlgos MAY be used as well. In any case, the units SHALL be clear for the user.
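The conversion from a display amount to base units can be sketched as follows; `to_base_units` is an illustrative helper, and `Decimal` is used because binary floats cannot represent values like 54.1354 exactly:

```python
from decimal import Decimal

def to_base_units(display_amount: str, decimals: int = 6) -> int:
    """Convert a human-readable amount to the asset's base units
    (microAlgos for Algos, which use 6 decimal places)."""
    scaled = Decimal(display_amount) * (10 ** decimals)
    if scaled != scaled.to_integral_value():
        raise ValueError("amount has more precision than the asset allows")
    return int(scaled)

print(to_base_units("100"))      # 100000000
print(to_base_units("54.1354"))  # 54135400
```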
#### Examples
[Section titled “Examples”](#examples)
Address:
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4
```
Address with label:
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?label=Silvio
```
Request 150.5 Algos from an address:
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150500000
```
Request 150 units of Asset ID 45 from an address:
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150&asset=45
```
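A payment URI like the ones above can be assembled as follows; `payment_uri` is an illustrative helper, not a standardized API:

```python
from urllib.parse import quote

def payment_uri(address: str, **params) -> str:
    """Assemble a payment URI, percent-encoding each value so that
    "=" and "&" inside values cannot be mistaken for separators."""
    query = "&".join(f"{k}={quote(str(v), safe='')}" for k, v in params.items())
    return f"algorand://{address}" + (f"?{query}" if query else "")

uri = payment_uri(
    "TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4",
    amount=150500000,
    label="Silvio",
)
```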
### Key Registration Transaction URIs
[Section titled “Key Registration Transaction URIs”](#key-registration-transaction-uris)
This extension to the base Algorand URI scheme defines how to encode key registration transactions so they can be shared through deeplinks, QR codes, and similar mechanisms while remaining compatible with the payment format introduced in [ARC-26](/arc-standards/arc-0026). Network selection uses the same authority-based mechanism described in the common format section.
The ABNF overview defines `keyregurn`, `keyregparams`, and `keyregparam`, which extend the shared productions with the key registration-specific keys enumerated below.
#### Scope
[Section titled “Scope”](#scope)
This section explicitly supports the two major subtypes of key registration transactions:
* Online keyreg transaction
* Declares intent to participate in consensus and configures required keys
* Offline keyreg transaction
* Declares intent to stop participating in consensus
The following variants of keyreg transactions are not defined:
* Non-participating keyreg transaction
* This transaction subtype is considered deprecated
* Heartbeat keyreg transaction
* This transaction subtype will be included in the future block incentives protocol. The protocol specifies that this transaction type must be submitted by a node in response to a programmatic “liveness challenge”. It is not meant to be signed or submitted by an end user.
#### Query Keys
[Section titled “Query Keys”](#query-keys-1)
* address: Algorand address of transaction sender. Required.
* type: fixed to “keyreg”. Used to disambiguate the transaction type from the base [ARC-26](/arc-standards/arc-0026) standard and other possible extensions. Required.
* votekey: The vote key to use in the transaction. Encoded with [base64url](https://www.rfc-editor.org/rfc/rfc4648.html#section-5). Required for keyreg online transactions.
* selkey: The selection key to use in the transaction. Encoded with base64url. Required for keyreg online transactions.
* sprfkey: The state proof key to use in the transaction. Encoded with base64url. Required for keyreg online transactions.
* votefst: The first round on which the voting keys will be valid. Required for keyreg online transactions.
* votelst: The last round on which the voting keys will be valid. Required for keyreg online transactions.
* votekd: The key dilution parameter to use in the transaction. Required for keyreg online transactions.
* xnote: As in [ARC-26](/arc-standards/arc-0026). A URL-encoded notes field value that must not be modifiable by the user when displayed to users. Optional.
* note: As in [ARC-26](/arc-standards/arc-0026). A URL-encoded default notes field value that the user interface may optionally make editable by the user. Optional.
* fee: OPTIONAL. A static fee to set for the transaction in microAlgos. Useful to signal intent to receive participation incentives (e.g. with a 2,000,000 microAlgo transaction fee.)
* (others): optional, for future extensions
#### Examples
[Section titled “Examples”](#examples-1)
Encoding keyreg online transaction with minimum fee:
```plaintext
{
"txn": {
"fee": 1000,
"fv": 1345,
"gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=",
"lv": 2345,
"selkey:b64": "+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=",
"snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=",
"sprfkey:b64": "3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W/iy/JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg==",
"type": "keyreg",
"votefst": 1300,
"votekd": 100,
"votekey:b64": "UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI=",
"votelst": 11300
}
}
```
Results in:
```plaintext
algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY?
type=keyreg
&selkey=-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c
&sprfkey=3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W_iy_JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg
&votefst=1300
&votekd=100
&votekey=UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI
&votelst=11300
```
Note: newlines added for readability.
Note the difference between base64 encoding in the raw object and base64url encoding in the URI parameters. For example, the selection key parameter `selkey` that begins with `+lfw+` in the raw object is encoded in base64url to `-lfw-` in the URI.
Note: Here, the fee is omitted from the URI (due to being set to the minimum 1,000 microAlgos.) When the fee is omitted, it is left up to the application or wallet to decide. This is for demonstrative purposes; the specification does not require this behavior.
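The base64-to-base64url re-encoding shown above can be sketched as follows; `b64_to_b64url` is an illustrative helper name:

```python
import base64

def b64_to_b64url(value: str) -> str:
    """Re-encode a standard base64 value ("+", "/", padded) as the
    unpadded base64url form used in URI parameters."""
    raw = base64.b64decode(value)
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

# The selection key from the raw transaction object above:
selkey = b64_to_b64url("+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=")
print(selkey)  # -lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c
```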
Encoding keyreg offline transaction:
```plaintext
{
"txn": {
"fee": 1000,
"fv": 1776240,
"gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=",
"lv": 1777240,
"snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=",
"type": "keyreg"
}
}
```
Results in:
```plaintext
algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY?type=keyreg
```
This offline keyreg transaction encoding is the smallest compatible representation.
Encoding keyreg online transaction with custom fee and note:
```plaintext
{
"txn": {
"fee": 2000000,
"fv": 1345,
"gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=",
"lv": 2345,
"note:b64": "Q29uc2Vuc3VzIHBhcnRpY2lwYXRpb24gZnR3",
"selkey:b64": "+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=",
"snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=",
"sprfkey:b64": "3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W/iy/JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg==",
"type": "keyreg",
"votefst": 1300,
"votekd": 100,
"votekey:b64": "UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI=",
"votelst": 11300
}
}
```
Results in:
```plaintext
algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY?
type=keyreg
&selkey=-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c
&sprfkey=3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W_iy_JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg
&votefst=1300
&votekd=100
&votekey=UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI
&votelst=11300
&fee=2000000
&note=Consensus%20participation%20ftw
```
Note: newlines added for readability.
### Application NoOp Call URIs
[Section titled “Application NoOp Call URIs”](#application-noop-call-uris)
NoOp calls are generic application calls that execute an Algorand smart contract’s approval program. This URI extension encodes the transactions so wallets, dApps, and services can invoke specific application methods using deeplinks and QR codes while remaining consistent with [ARC-26](/arc-standards/arc-0026). As with other URI types, the optional authority selects the network.
The ABNF overview defines `noopurn`, `noopparams`, and `noopparam`, which extend the shared productions with the application call keys described below.
As in [ARC-26](/arc-standards/arc-0026), URIs follow the general format for URIs as set forth in RFC 3986. The path component consists of an Algorand address, and the query component provides additional transaction parameters.
Elements of the query component may contain characters outside the valid range. These are encoded differently depending on their expected character set. The text components (note, xnote) MUST first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence MUST be percent-encoded as described in RFC 3986. The binary components (args, refs, etc.) MUST be encoded with base64url as specified in [RFC 4648 section 5](https://www.rfc-editor.org/rfc/rfc4648.html#section-5).
#### Query Keys
[Section titled “Query Keys”](#query-keys-2)
* address: Algorand address of transaction sender
* type: fixed to “appl”. Used to disambiguate the transaction type from the base [ARC-26](/arc-standards/arc-0026) standard and other possible extensions
* app: The first occurrence specifies the called application (Algorand Smart Contract) ID and is mandatory. Additional occurrences are optional and populate the Application NoOp call’s foreign applications array.
* method: Specify the full method expression (e.g. “example\_method(uint64,uint64)void”).
* arg: Specify arguments used for calling the NoOp method, to be encoded within URI.
* box: Box references to be used in Application NoOp method call box array.
* asset: Asset reference to be used in Application NoOp method call foreign assets array.
* account: Account or NFD address to be used in Application NoOp method call foreign accounts array.
* fee: OPTIONAL. An optional static fee to set for the transaction in microAlgos.
* (others): optional, for future extensions
Note: If the fee is omitted, the minimum fee is assumed for the transaction.
#### Template URI vs Actionable URI
[Section titled “Template URI vs Actionable URI”](#template-uri-vs-actionable-uri)
If the URI is constructed so that other dApps, wallets, or protocols can reuse it with their own runtime Algorand entities, then the placeholder account/app address in the URI MUST be the ZeroAddress (“AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ”). Since the ZeroAddress cannot initiate any action, this approach is considered secure.
#### Examples
[Section titled “Examples”](#examples-2)
Call `claim(uint64,uint64)byte[]` on contract 11111111 paying a fee of 10000 microAlgos from a specific address:
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?type=appl&app=11111111&method=claim(uint64,uint64)byte[]&arg=20000&arg=474567&asset=45&fee=10000
```
Call the same method paying the default 1000 microAlgo fee while providing additional foreign applications:
```plaintext
algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?type=appl&app=11111111&method=claim(uint64,uint64)byte[]&arg=20000&arg=474567&asset=45&app=22222222&app=33333333
```
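A reader of the second example must preserve repeated `app` and `arg` keys, since the first `app` is the call target and the rest populate the foreign applications array. A sketch using Python’s standard library (variable names are illustrative):

```python
from urllib.parse import urlsplit, parse_qs

uri = ("algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4"
       "?type=appl&app=11111111&method=claim(uint64,uint64)byte[]"
       "&arg=20000&arg=474567&asset=45&app=22222222&app=33333333")

parts = urlsplit(uri)
params = parse_qs(parts.query)  # repeated keys are collected into lists

app_ids = [int(a) for a in params["app"]]
target_app, foreign_apps = app_ids[0], app_ids[1:]
args = params["arg"]  # repeated "arg" keys preserve call-argument order
```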
### Blockchain Query URIs
[Section titled “Blockchain Query URIs”](#blockchain-query-uris)
This read-only URI extension defines a standardized method for querying application and asset data on Algorand. It enables applications, websites, and QR code implementations to construct URIs that retrieve application state, box data, and asset metadata in a structured format. The design is inspired by [ARC-26](/arc-standards/arc-0026) and reuses its core URI principles for consistency.
Algorand URIs in this section follow the general format for URIs as defined in RFC 3986. The authority optionally selects the network (`gh:` or `net:`), while the leading path segment specifies whether the URI targets an application (`.../app/`) or an asset (`.../asset/`). Query parameters define the specific data fields being requested. Parameters MAY contain characters outside the valid range. These MUST first be encoded in UTF-8, then percent-encoded according to RFC 3986.
The ABNF overview defines `appqueryurn`, `appqueryparam`, `assetqueryurn`, and `assetqueryparam`, which extend the shared productions with the query keys summarized below.
#### Application Query URIs (`algorand://app`)
[Section titled “Application Query URIs (algorand://app)”](#application-query-uris-algorandapp)
The application URI allows querying the state of an application, including data from the application’s box storage, global storage, and local storage, as well as the TEAL program associated with it. Each storage type has specific requirements.
#### Asset Query URIs (`algorand://asset`)
[Section titled “Asset Query URIs (algorand://asset)”](#asset-query-uris-algorandasset)
The asset URI enables retrieval of metadata and configuration details for a specific asset, such as its name, total supply, decimal precision, and associated addresses.
#### Parameter Definitions
[Section titled “Parameter Definitions”](#parameter-definitions)
**Application parameters**
* `box`: Queries the application’s box storage with a key encoded in base64url.
* `global`: Queries the global storage of the application using a base64url-encoded key.
* `local`: Queries local storage for a specified account. Requires an additional `algorandaddress` parameter, representing the account whose local storage is queried.
* `algorandaddress`: Supplies the account whose local storage should be inspected when paired with `local`.
* `tealcode`: Requests the TEAL program associated with the application.
**Asset parameters**
* `total`: Queries the total supply of the asset.
* `decimals`: Queries the number of decimal places used for the asset.
* `frozen`: Queries whether the asset is frozen by default.
* `unitname`: Queries the short name or unit symbol of the asset (e.g., “USDT”).
* `assetname`: Queries the full name of the asset (e.g., “Tether”).
* `url`: Queries the URL associated with the asset, providing more information.
* `metadatahash`: Queries the metadata hash associated with the asset.
* `manager`: Queries the address of the asset manager.
* `reserve`: Queries the reserve address holding non-minted units of the asset.
* `freeze`: Queries the freeze address for the asset.
* `clawback`: Queries the clawback address for the asset.
#### Query Key Descriptions
[Section titled “Query Key Descriptions”](#query-key-descriptions)
For each parameter, the query key name is listed, followed by its purpose:
* `box`: Retrieves information from the specified box storage key.
* `global`: Retrieves data from the specified global storage key.
* `local`: Retrieves data from the specified local storage key. Requires `algorandaddress` to specify the account.
* `total`: Retrieves the asset’s total supply.
* `decimals`: Retrieves the number of decimal places for the asset.
* `frozen`: Retrieves the default frozen status of the asset.
* `unitname`: Retrieves the asset’s short name or symbol.
* `assetname`: Retrieves the full name of the asset.
* `url`: Retrieves the URL associated with the asset.
* `metadatahash`: Retrieves the metadata hash for the asset.
* `manager`: Retrieves the manager address of the asset.
* `reserve`: Retrieves the reserve address for the asset.
* `freeze`: Retrieves the freeze address of the asset.
* `clawback`: Retrieves the clawback address of the asset.
#### Examples
[Section titled “Examples”](#examples-3)
Query an application’s box storage:
```plaintext
algorand://app/2345?box=YWxnb3JvbmQ
```
Query global storage:
```plaintext
algorand://app/12345?global=Z2xvYmFsX2tleQ
```
Query local storage for a specific address:
```plaintext
algorand://app/12345?local=bG9jYWxfa2V5&algorandaddress=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567
```
Query the total supply of an asset:
```plaintext
algorand://asset/67890?total
```
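Query parameters such as `total` are flags that carry no `=value`, so a parser must keep blank values rather than discard them. A sketch using Python’s standard library:

```python
from urllib.parse import urlsplit, parse_qs

uri = "algorand://asset/67890?total&decimals"
parts = urlsplit(uri)  # path "/67890" carries the asset ID

# keep_blank_values retains flag-style parameters that have no "=value".
requested = set(parse_qs(parts.query, keep_blank_values=True))

asset_id = int(parts.path.lstrip("/"))
```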
## Rationale
[Section titled “Rationale”](#rationale)
The key registration extension provides a standardized way to encode key registration transactions in order to enhance the user experience of signing them, in particular for an Algorand node runner who does not keep their spending keys resident on their node (as is best practice). The parameter names were chosen to match the corresponding names in encoded key registration transactions.
Algorand application NoOp method calls cover the majority of application transactions on Algorand and have a wide range of use cases. For use cases where the runtime knows exactly which arguments and transaction arrays the called application needs, and there are no direct interactions, this extension fills a gap: the original ARC-26 standard did not support application calls.
Previously, the Algorand URI scheme was primarily used to create transactions on the chain. Extending it to cover read-only queries allows a URI scheme to directly retrieve information from the chain, specifically for applications and assets. This provides a unified, standardized method for querying Algorand application and asset data, allowing interoperability across applications and services.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
This ARC replaces ARCs 26, 78, 79, and 82 without invalidating previously generated URIs. Existing URIs that conform to the earlier specifications remain valid under this consolidated definition, so no backwards incompatibilities are introduced beyond the deprecation of the superseded documents. For network selection, implementations MAY continue to accept the legacy query-based selectors during a migration period but SHOULD emit the authority-based form specified above.
## Reference Implementation
[Section titled “Reference Implementation”](#reference-implementation)
None.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
The transaction-related sections of this specification introduce no additional security considerations beyond those identified in the originating ARCs.
Since the blockchain query URIs are intended for read-only operations, they do not alter application or asset state, mitigating many security risks. However, data retrieved from these URIs should be validated to ensure it meets user expectations and that any displayed data cannot be tampered with.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Algorand Smart Contract Token Specification
> Base specification for tokens implemented as smart contracts
## Abstract
[Section titled “Abstract”](#abstract)
This ARC (Algorand Request for Comments) specifies an interface for tokens to be implemented on Algorand as smart contracts. The interface defines a minimal interface required for tokens to be held and transferred, with the potential for further augmentation through additional standard interfaces and custom methods.
## Motivation
[Section titled “Motivation”](#motivation)
Currently, most tokens in the Algorand ecosystem are represented by ASAs (Algorand Standard Assets). However, to provide rich extra functionality, it can be desirable to implement tokens as smart contracts instead. To foster an interoperable token ecosystem, it is necessary that the core interfaces for tokens be standardized.
## Specification
[Section titled “Specification”](#specification)
The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in [RFC-2119](https://www.ietf.org/rfc/rfc2119.txt).
### Core Token specification
[Section titled “Core Token specification”](#core-token-specification)
A smart contract token that is compliant with this standard MUST implement the following interface:
```json
{
"name": "ARC-200",
"desc": "Smart Contract Token Base Interface",
"methods": [
{
"name": "arc200_name",
"desc": "Returns the name of the token",
"readonly": true,
"args": [],
"returns": { "type": "byte[32]", "desc": "The name of the token" }
},
{
"name": "arc200_symbol",
"desc": "Returns the symbol of the token",
"readonly": true,
"args": [],
"returns": { "type": "byte[8]", "desc": "The symbol of the token" }
},
{
"name": "arc200_decimals",
"desc": "Returns the decimals of the token",
"readonly": true,
"args": [],
"returns": { "type": "uint8", "desc": "The decimals of the token" }
},
{
"name": "arc200_totalSupply",
"desc": "Returns the total supply of the token",
"readonly": true,
"args": [],
"returns": { "type": "uint256", "desc": "The total supply of the token" }
},
{
"name": "arc200_balanceOf",
"desc": "Returns the current balance of the owner of the token",
"readonly": true,
"args": [
{
"type": "address",
"name": "owner",
"desc": "The address of the owner of the token"
}
],
"returns": {
"type": "uint256",
"desc": "The current balance of the holder of the token"
}
},
{
"name": "arc200_transfer",
"desc": "Transfers tokens",
"readonly": false,
"args": [
{
"type": "address",
"name": "to",
"desc": "The destination of the transfer"
},
{
"type": "uint256",
"name": "value",
"desc": "Amount of tokens to transfer"
}
],
"returns": { "type": "bool", "desc": "Success" }
},
{
"name": "arc200_transferFrom",
"desc": "Transfers tokens from source to destination as approved spender",
"readonly": false,
"args": [
{
"type": "address",
"name": "from",
"desc": "The source of the transfer"
},
{
"type": "address",
"name": "to",
"desc": "The destination of the transfer"
},
{
"type": "uint256",
"name": "value",
"desc": "Amount of tokens to transfer"
}
],
"returns": { "type": "bool", "desc": "Success" }
},
{
"name": "arc200_approve",
"desc": "Approve spender for a token",
"readonly": false,
"args": [
{ "type": "address", "name": "spender" },
{ "type": "uint256", "name": "value" }
],
"returns": { "type": "bool", "desc": "Success" }
},
{
"name": "arc200_allowance",
"desc": "Returns the current allowance of the spender of the tokens of the owner",
"readonly": true,
"args": [
{ "type": "address", "name": "owner" },
{ "type": "address", "name": "spender" }
],
"returns": { "type": "uint256", "desc": "The remaining allowance" }
}
],
"events": [
{
"name": "arc200_Transfer",
"desc": "Transfer of tokens",
"args": [
{
"type": "address",
"name": "from",
"desc": "The source of transfer of tokens"
},
{
"type": "address",
"name": "to",
"desc": "The destination of transfer of tokens"
},
{
"type": "uint256",
"name": "value",
"desc": "The amount of tokens transferred"
}
]
},
{
"name": "arc200_Approval",
"desc": "Approval of tokens",
"args": [
{
"type": "address",
"name": "owner",
"desc": "The owner of the tokens"
},
{
"type": "address",
"name": "spender",
"desc": "The approved spender of tokens"
},
{
"type": "uint256",
"name": "value",
"desc": "The amount of tokens approved"
}
]
}
]
}
```
Ownership of a token by a zero address indicates that a token is out of circulation indefinitely, or otherwise burned or destroyed.
The `arc200_transfer` and `arc200_transferFrom` methods MUST error when the balance of `from` is insufficient. In the case of the `arc200_transfer` method, `from` is implied to be the `owner` of the tokens. The `arc200_transferFrom` method MUST error unless called by a `spender` approved by an `owner`. The `arc200_transfer` and `arc200_transferFrom` methods MUST emit an `arc200_Transfer` event. An `arc200_Transfer` event SHOULD be emitted, with `from` being the zero address, when a token is minted, and with `to` being the zero address, when a token is destroyed.
The `arc200_Approval` event MUST be emitted when an `arc200_approve` or `arc200_transferFrom` method is called successfully.
A value of zero for the `arc200_approve` method and the `arc200_Approval` event indicates no approval. When emitted by the `arc200_transferFrom` method, the `arc200_Approval` event indicates the remaining approval value after it is decremented.
The contract MUST allow multiple operators per owner.
All methods in this standard that are marked as `readonly` MUST be read-only as defined by [ARC-22](/arc-standards/arc-0022).
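The transfer and approval rules above can be sketched as an in-memory model. This is an illustrative sketch only, not an AVM contract: a real implementation keeps state in contract storage and emits events via `log`, and the class and field names here are invented for the example.

```python
# Illustrative in-memory model of the ARC-200 transfer/approve semantics.
# Balances and allowances live in dicts; "events" are appended to a list
# instead of being logged on chain.

ZERO_ADDRESS = "A" * 58  # stand-in placeholder for the Algorand zero address


class Arc200Model:
    def __init__(self, name, symbol, decimals, total_supply, creator):
        self.name, self.symbol, self.decimals = name, symbol, decimals
        self.total_supply = total_supply
        self.balances = {creator: total_supply}
        self.allowances = {}  # (owner, spender) -> remaining allowance
        self.events = []

    def balance_of(self, owner):
        return self.balances.get(owner, 0)

    def transfer(self, owner, to, value):
        # MUST error when the balance of `from` (here the owner) is insufficient.
        if self.balance_of(owner) < value:
            raise ValueError("insufficient balance")
        self.balances[owner] = self.balance_of(owner) - value
        self.balances[to] = self.balance_of(to) + value
        self.events.append(("arc200_Transfer", owner, to, value))
        return True

    def approve(self, owner, spender, value):
        # A value of zero indicates no approval.
        self.allowances[(owner, spender)] = value
        self.events.append(("arc200_Approval", owner, spender, value))
        return True

    def transfer_from(self, spender, frm, to, value):
        # MUST error unless called by a spender approved by the owner.
        remaining = self.allowances.get((frm, spender), 0)
        if remaining < value:
            raise ValueError("insufficient allowance")
        if self.balance_of(frm) < value:
            raise ValueError("insufficient balance")
        self.allowances[(frm, spender)] = remaining - value
        self.balances[frm] = self.balance_of(frm) - value
        self.balances[to] = self.balance_of(to) + value
        self.events.append(("arc200_Transfer", frm, to, value))
        # The Approval event reports the allowance *after* the decrement.
        self.events.append(("arc200_Approval", frm, spender, remaining - value))
        return True
```

For example, after `approve` grants a spender an allowance of 50, a `transfer_from` of 30 leaves an allowance of 20, and the emitted `arc200_Approval` event carries that post-decrement value.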
## Rationale
[Section titled “Rationale”](#rationale)
This specification is based on [ERC-20](https://eips.ethereum.org/EIPS/eip-20).
### Core Specification
[Section titled “Core Specification”](#core-specification)
The core specification is identical to ERC-20.
## Backwards Compatibility
[Section titled “Backwards Compatibility”](#backwards-compatibility)
This standard introduces a new kind of token that is incompatible with tokens defined as ASAs. Applications that want to index, manage, or view tokens on Algorand will need to add code to handle both these new smart contract tokens and the already popular ASA implementation of tokens, and existing smart contracts that handle ASA-based tokens will not work with these new smart contract tokens.
While this is a severe backward incompatibility, smart contract tokens are necessary to provide richer and more diverse functionality for tokens.
## Security Considerations
[Section titled “Security Considerations”](#security-considerations)
The fact that anybody can create a new implementation of a smart contract token standard opens the door for many of those implementations to contain security bugs. Additionally, malicious token implementations could contain hidden anti-features unexpected by users. As with other smart contract domains, it is difficult for users to verify or understand the security properties of smart contract tokens. This is a tradeoff compared with ASA tokens, which share a smaller, easier-to-validate set of security properties; smart contract tokens give up that simplicity to gain the possibility of adding novel features.
## Copyright
[Section titled “Copyright”](#copyright)
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ARC Category Guidelines
> ARCs by categories
Welcome to the Guideline. Here you’ll find information on which ARCs to use for your project.
## General ARCs
[Section titled “General ARCs”](#general-arcs)
### ARC 0 - ARC Purpose and Guidelines
[Section titled “ARC 0 - ARC Purpose and Guidelines”](#arc-0---arc-purpose-and-guidelines)
#### What is an ARC?
[Section titled “What is an ARC?”](#what-is-an-arc)
ARC stands for Algorand Request for Comments. An ARC is a design document providing information to the Algorand community or describing a new feature for Algorand or its processes or environment. The ARC should provide a concise technical specification and a rationale for the feature. The ARC author is responsible for building consensus within the community and documenting dissenting opinions. We intend ARCs to be the primary mechanisms for proposing new features and collecting community technical input on an issue. We maintain ARCs as text files in a versioned repository. Their revision history is the historical record of the feature proposal.
### ARC 26 - URI scheme
[Section titled “ARC 26 - URI scheme”](#arc-26---uri-scheme)
This URI specification represents a standardized way for applications and websites to send requests and information through deeplinks, QR codes, etc. It is heavily based on Bitcoin’s [BIP-0021](https://github.com/bitcoin/bips/blob/master/bip-0021.mediawiki) and should be seen as a derivative of it. The decision to base it on BIP-0021 was made to keep it as easy and compatible as possible for any other application.
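A minimal sketch of composing such a URI, assuming the base form `algorand://<address>?amount=<microAlgos>&…` with optional query parameters (the helper name and parameter set here are illustrative, not part of the spec):

```python
# Sketch: build an ARC-26-style Algorand payment URI for a deeplink or QR code.
from urllib.parse import urlencode


def make_payment_uri(address, micro_algos=None, label=None, note=None):
    params = {}
    if micro_algos is not None:
        params["amount"] = str(micro_algos)  # amounts are in microAlgos
    if label is not None:
        params["label"] = label
    if note is not None:
        params["note"] = note
    query = urlencode(params)
    return f"algorand://{address}" + (f"?{query}" if query else "")
```

A wallet scanning the resulting QR code can then parse the address and query parameters back out to pre-fill a payment transaction.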
### ARC 65 - AVM Run Time Errors In Program
[Section titled “ARC 65 - AVM Run Time Errors In Program”](#arc-65---avm-run-time-errors-in-program)
This document introduces a convention for raising informative run time errors on the Algorand Virtual Machine (AVM) directly from the program bytecode.
### ARC 78 - URI scheme, keyreg Transactions extension
[Section titled “ARC 78 - URI scheme, keyreg Transactions extension”](#arc-78---uri-scheme-keyreg-transactions-extension)
This URI specification represents an extension to the base Algorand URI encoding standard ([ARC-26](/arc-standards/arc-0026)) that specifies encoding of key registration transactions through deeplinks, QR codes, etc.
### ARC 79 - URI scheme, App NoOp call extension
[Section titled “ARC 79 - URI scheme, App NoOp call extension”](#arc-79---uri-scheme-app-noop-call-extension)
NoOp calls are Generic application calls to execute the Algorand smart contract ApprovalPrograms. This URI specification proposes an extension to the base Algorand URI encoding standard ([ARC-26](/arc-standards/arc-0026)) that specifies encoding of application NoOp transactions into [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986) standard URIs.
### ARC 82 - URI scheme blockchain information
[Section titled “ARC 82 - URI scheme blockchain information”](#arc-82---uri-scheme-blockchain-information)
This URI specification defines a standardized method for querying application and asset data on Algorand. It enables applications, websites, and QR code implementations to construct URIs that allow users to retrieve data such as application state and asset metadata in a structured format. This specification is inspired by [ARC-26](/arc-standards/arc-0026) and follows similar principles, with adjustments specific to read-only queries for applications and assets.
### ARC 83 - xGov Council - Application Process
[Section titled “ARC 83 - xGov Council - Application Process”](#arc-83---xgov-council---application-process)
The goal of this ARC is to clearly define the process for running for an xGov Council seat.
### ARC 86 - xGov status and voting power
[Section titled “ARC 86 - xGov status and voting power”](#arc-86---xgov-status-and-voting-power)
This ARC defines the Expert Governor (xGov) status and voting power in the Algorand Expert Governance.
### ARC 90 - URI scheme
[Section titled “ARC 90 - URI scheme”](#arc-90---uri-scheme)
This ARC defines a unified Algorand URI scheme that covers payment transactions, key registration, application NoOp calls, and read-only blockchain queries. It expands on earlier URI specifications to support deeplinks, QR codes, and other contexts where structured URIs communicate transaction intent or state queries.
## ASA ARCs
[Section titled “ASA ARCs”](#asa-arcs)
### ARC 3 - Conventions Fungible/Non-Fungible Tokens
[Section titled “ARC 3 - Conventions Fungible/Non-Fungible Tokens”](#arc-3---conventions-fungiblenon-fungible-tokens)
The goal of these conventions is to make it simpler for block explorers, wallets, exchanges, marketplaces, and more generally, client software to display the properties of a given ASA.
### ARC 16 - Convention for declaring traits of an NFT’s
[Section titled “ARC 16 - Convention for declaring traits of an NFT’s”](#arc-16---convention-for-declaring-traits-of-an-nfts)
The goal is to establish a standard for how traits are declared inside a non-fungible token’s (NFT’s) metadata, for example as specified in ([ARC-3](/arc-standards/arc-0003)), ([ARC-69](/arc-standards/arc-0069)) or ([ARC-72](/arc-standards/arc-0072)).
### ARC 19 - Templating of NFT ASA URLs for mutability
[Section titled “ARC 19 - Templating of NFT ASA URLs for mutability”](#arc-19---templating-of-nft-asa-urls-for-mutability)
This ARC describes a template substitution for URLs in ASAs, initially for `ipfs://` scheme URLs, allowing mutable CID replacement in rendered URLs. The proposed template-XXX scheme has substitutions like:
```plaintext
template-ipfs://{ipfscid::::}[/...]
```
This will allow modifying the 32-byte ‘Reserve address’ in an ASA to represent a new IPFS content-id hash. Changing the reserve address via an asset-config transaction will be all that is needed to point an ASA URL to new IPFS content. The client reading this URL will compose a fully formed IPFS Content-ID based on the version, multicodec, and hash arguments provided in the ipfscid substitution.
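Extracting the substitution arguments can be sketched as follows. This is a hedged parsing example only, assuming the `{ipfscid:<version>:<multicodec>:<field>:<hash-type>}` parameter order; composing the final CID from the reserve address additionally requires a multiformats/CID library and is omitted here.

```python
# Sketch: pull the ipfscid substitution parameters out of a template-ipfs URL.
import re

TEMPLATE_RE = re.compile(
    r"^template-ipfs://\{ipfscid"
    r":(?P<version>\d*)"       # CID version, e.g. "1"
    r":(?P<codec>[a-z0-9-]*)"  # multicodec, e.g. "raw" or "dag-pb"
    r":(?P<field>[a-z]*)"      # ASA field holding the hash, e.g. "reserve"
    r":(?P<hash>[a-z0-9-]*)"   # hash type, e.g. "sha2-256"
    r"\}(?P<path>/.*)?$"
)


def parse_template_url(url):
    """Return the substitution parameters as a dict, or None if not a template URL.

    Empty parameter fields (as in "{ipfscid::::}") are returned as empty
    strings; a client would then fall back to its defaults.
    """
    m = TEMPLATE_RE.match(url)
    return m.groupdict() if m else None
```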
### ARC 20 - Smart ASA
[Section titled “ARC 20 - Smart ASA”](#arc-20---smart-asa)
A “Smart ASA” is an Algorand Standard Asset (ASA) controlled by a Smart Contract that exposes methods to create, configure, transfer, freeze, and destroy the asset. This ARC defines the ABI interface of such a Smart Contract, the required metadata, and suggests a reference implementation.
### ARC 36 - Convention for declaring filters of an NFT
[Section titled “ARC 36 - Convention for declaring filters of an NFT”](#arc-36---convention-for-declaring-filters-of-an-nft)
The goal is to establish a standard for how filters are declared inside a non-fungible token’s (NFT’s) metadata.
### ARC 62 - ASA Circulating Supply
[Section titled “ARC 62 - ASA Circulating Supply”](#arc-62---asa-circulating-supply)
This ARC introduces a standard for the definition of circulating supply for Algorand Standard Assets (ASA) and its client-side retrieval. A reference implementation is suggested.
### ARC 69 - ASA Parameters Conventions, Digital Media
[Section titled “ARC 69 - ASA Parameters Conventions, Digital Media”](#arc-69---asa-parameters-conventions-digital-media)
The goal of these conventions is to make it simpler to display the properties of a given ASA. This ARC differs from [ARC-3](/arc-standards/arc-0003) by focusing on optimization for fetching of digital media, as well as the use of onchain metadata. Furthermore, since asset configuration transactions are used to store the metadata, this ARC can be applied to existing ASAs. While mutability helps with backwards compatibility and other use cases, like leveling up an RPG character, some use cases call for immutability. In these cases, the ASA manager MAY remove the manager address, after which point the Algorand network won’t allow anyone to send asset configuration transactions for the ASA. This effectively makes the latest valid [ARC-69](/arc-standards/arc-0069) metadata immutable.
### ARC 71 - Non-Transferable ASA
[Section titled “ARC 71 - Non-Transferable ASA”](#arc-71---non-transferable-asa)
The goal is to make it simpler for block explorers, wallets, exchanges, marketplaces, and more generally, client software to identify & interact with a Non-transferable ASA (NTA). This defines an interface extending [ARC-3](/arc-standards/arc-0003) & [ARC-69](/arc-standards/arc-0069) non-fungible ASAs to create Non-transferable ASAs. Before issuance, both parties (issuer and receiver) have to agree on who (if anyone) has the authorization to burn this ASA.
> This spec is compatible with [ARC-19](/arc-standards/arc-0019) to create an updatable Non-transferable ASA.
### ARC 89 - ASA Metadata Registry
[Section titled “ARC 89 - ASA Metadata Registry”](#arc-89---asa-metadata-registry)
This ARC defines the interface and the implementation of a singleton Application that provides Algorand Standard Assets metadata through the Algod API or the AVM.
## Application ARCs
[Section titled “Application ARCs”](#application-arcs)
### ARC 4 - Application Binary Interface (ABI)
[Section titled “ARC 4 - Application Binary Interface (ABI)”](#arc-4---application-binary-interface-abi)
This document introduces conventions for encoding method calls, including argument and return value encoding, in Algorand Application call transactions. The goal is to allow clients, such as wallets and dapp frontends, to properly encode call transactions based on a description of the interface. Further, explorers will be able to show details of these method invocations.
#### Definitions
[Section titled “Definitions”](#definitions)
* **Application:** an Algorand Application, aka “smart contract”, “stateful contract”, “contract”, or “app”.
* **HLL:** a higher level language that compiles to TEAL bytecode.
* **dapp (frontend)**: a decentralized application frontend, interpreted here to mean an off-chain frontend (a webapp, native app, etc.) that interacts with Applications on the blockchain.
* **wallet**: an off-chain application that stores secret keys for on-chain accounts and can display and sign transactions for these accounts.
* **explorer**: an off-chain application that allows browsing the blockchain, showing details of transactions.
### ARC 18 - Royalty Enforcement Specification
[Section titled “ARC 18 - Royalty Enforcement Specification”](#arc-18---royalty-enforcement-specification)
A specification to describe a set of methods that offer an API to enforce Royalty Payments to a Royalty Receiver given a policy describing the royalty shares, both on primary and secondary sales. This is an implementation of an [ARC-20](/arc-standards/arc-0020) specification and other methods may be implemented in the same contract according to that specification.
### ARC 21 - Round based datafeed oracles on Algorand
[Section titled “ARC 21 - Round based datafeed oracles on Algorand”](#arc-21---round-based-datafeed-oracles-on-algorand)
The following document introduces conventions for building round based datafeed oracles on Algorand using the ABI defined in [ARC-4](/arc-standards/arc-0004).
### ARC 22 - Add `read-only` annotation to ABI methods
[Section titled “ARC 22 - Add read-only annotation to ABI methods”](#arc-22---add-read-only-annotation-to-abi-methods)
The goal of this convention is to allow smart contract developers to distinguish between methods which mutate state and methods which don’t by introducing a new property to the `Method` descriptor.
### ARC 23 - Sharing Application Information
[Section titled “ARC 23 - Sharing Application Information”](#arc-23---sharing-application-information)
The following document introduces a convention for appending information (stored in various files) to the compiled application’s bytes. The goal of this convention is to standardize the process of verifying and adding this information. The encoded information byte string is `arc23` followed by the IPFS CID v1 of a folder containing the files with the information. The minimum required file is `contract.json`, representing the contract metadata (as described in [ARC-4](/arc-standards/arc-0004), and as extended by future potential ARCs).
### ARC 28 - Algorand Event Log Spec
[Section titled “ARC 28 - Algorand Event Log Spec”](#arc-28---algorand-event-log-spec)
Algorand dapps can use the [`log`](https://developer.algorand.org/docs/get-details/dapps/avm/teal/opcodes/#log) primitive to attach information about an application call. This ARC proposes the concept of Events, which are merely a way in which data contained in these logs may be categorized and structured. In short: to emit an Event, a dapp calls `log` with ABI formatting of the log data, and a 4-byte prefix to indicate which Event it is.
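The log payload described above can be sketched as a 4-byte event prefix followed by the ABI-encoded event data, with the prefix derived from the event signature the same way ARC-4 derives method selectors (first 4 bytes of the SHA-512/256 hash of the signature). The helper names here are illustrative; ABI encoding of the arguments themselves is out of scope, so `encoded_args` is assumed to already be ABI-encoded bytes.

```python
# Sketch: assemble an ARC-28 event log payload.
# Requires a Python build whose OpenSSL exposes the "sha512_256" algorithm.
import hashlib


def event_prefix(event_signature):
    """First 4 bytes of SHA-512/256 of e.g. "Transfer(address,address,uint256)"."""
    return hashlib.new("sha512_256", event_signature.encode("utf-8")).digest()[:4]


def event_log_payload(event_signature, encoded_args):
    """Prefix + ABI-encoded event data, i.e. the bytes a dapp passes to `log`."""
    return event_prefix(event_signature) + encoded_args
```

An indexer reading application logs can then match the first 4 bytes of each log entry against the prefixes of known event signatures to categorize the data.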
### ARC 32 - Application Specification
[Section titled “ARC 32 - Application Specification”](#arc-32---application-specification)
> [!NOTE] This specification will eventually be deprecated by the [`ARC-56`](https://github.com/algorandfoundation/ARCs/pull/258) specification. An Application is partially defined by its [methods](/arc-standards/arc-0004), but further information about the Application should be available. Other descriptive elements of an application may include its State Schema, the original TEAL source programs, default method arguments, and custom data types. This specification defines the descriptive elements of an Application that should be available to clients to provide useful information for an Application Client.
### ARC 54 - ASA Burning App
[Section titled “ARC 54 - ASA Burning App”](#arc-54---asa-burning-app)
This ARC provides TEAL which would deploy an application that can be used for burning Algorand Standard Assets. The goal is to have apps deployed on the public networks using this TEAL to provide a standardized burn address and app ID.
### ARC 56 - Extended App Description
[Section titled “ARC 56 - Extended App Description”](#arc-56---extended-app-description)
This ARC takes the existing JSON description of a contract as described in [ARC-4](/arc-standards/arc-0004) and adds more fields for the purpose of client interaction.
### ARC 72 - Algorand Smart Contract NFT Specification
[Section titled “ARC 72 - Algorand Smart Contract NFT Specification”](#arc-72---algorand-smart-contract-nft-specification)
This specifies an interface for non-fungible tokens (NFTs) to be implemented on Algorand as smart contracts. This interface defines a minimal interface for NFTs to be owned and traded, to be augmented by other standard interfaces and custom methods.
### ARC 73 - Algorand Interface Detection Spec
[Section titled “ARC 73 - Algorand Interface Detection Spec”](#arc-73---algorand-interface-detection-spec)
This ARC specifies an interface detection interface based on [ERC-165](https://eips.ethereum.org/EIPS/eip-165). This interface allows smart contracts and indexers to detect whether a smart contract implements a particular interface based on an interface selector.
### ARC 74 - NFT Indexer API
[Section titled “ARC 74 - NFT Indexer API”](#arc-74---nft-indexer-api)
This specifies a REST interface that can be implemented by indexing services to provide data about NFTs conforming to the [ARC-72](/arc-standards/arc-0072) standard.
### ARC 87 - Key Name Specification
[Section titled “ARC 87 - Key Name Specification”](#arc-87---key-name-specification)
Adopt a standard key name specification for complex data. This defines key names that can be used to represent JSON, Blobs, or other structures that do not fit neatly into the state.
### ARC 200 - Algorand Smart Contract Token Specification
[Section titled “ARC 200 - Algorand Smart Contract Token Specification”](#arc-200---algorand-smart-contract-token-specification)
This ARC (Algorand Request for Comments) specifies an interface for tokens to be implemented on Algorand as smart contracts. The interface defines a minimal interface required for tokens to be held and transferred, with the potential for further augmentation through additional standard interfaces and custom methods.
## Explorer ARCs
[Section titled “Explorer ARCs”](#explorer-arcs)
### ARC 2 - Algorand Transaction Note Field Conventions
[Section titled “ARC 2 - Algorand Transaction Note Field Conventions”](#arc-2---algorand-transaction-note-field-conventions)
The goal of these conventions is to make it simpler for block explorers and indexers to parse the data in the note fields and filter transactions of certain dApps.
## Wallet ARCs
[Section titled “Wallet ARCs”](#wallet-arcs)
### ARC 1 - Algorand Wallet Transaction Signing API
[Section titled “ARC 1 - Algorand Wallet Transaction Signing API”](#arc-1---algorand-wallet-transaction-signing-api)
The goal of this API is to propose a standard way for a dApp to request the signature of a list of transactions from an Algorand wallet. This document also includes detailed security requirements to reduce the risks of users being tricked into signing dangerous transactions. As the Algorand blockchain adds new features, these requirements may change.
### ARC 5 - Wallet Transaction Signing API (Functional)
[Section titled “ARC 5 - Wallet Transaction Signing API (Functional)”](#arc-5---wallet-transaction-signing-api-functional)
ARC-1 defines a standard for signing transactions with security in mind. This proposal is a strict subset of ARC-1 that outlines only the minimum functionality required in order to be usable. Wallets that conform to ARC-1 already conform to this API. Wallets conforming to [ARC-5](/arc-standards/arc-0005) but not ARC-1 **MUST** only be used for testing purposes and **MUST NOT** be used on MainNet, because ARC-5 does not provide the same security guarantees as ARC-1 to properly protect wallet users.
### ARC 25 - Algorand WalletConnect v1 API
[Section titled “ARC 25 - Algorand WalletConnect v1 API”](#arc-25---algorand-walletconnect-v1-api)
WalletConnect is an open protocol to communicate securely between mobile wallets and decentralized applications (dApps) using QR code scanning (desktop) or deep linking (mobile). Its main use case is allowing users to sign transactions on web apps using a mobile wallet. This document aims to establish a standard API for using the WalletConnect v1 protocol on Algorand, leveraging the existing transaction signing APIs defined in [ARC-1](/arc-standards/arc-0001).
### ARC 27 - Provider Message Schema
[Section titled “ARC 27 - Provider Message Schema”](#arc-27---provider-message-schema)
Building off of the work of the previous ARCs relating to provider transaction signing (ARC-0005), provider address discovery (ARC-0006), provider transaction network posting (ARC-0007), and provider transaction signing & posting (ARC-0008), this proposal aims to comprehensively outline a common message schema between clients and providers. Furthermore, this proposal extends the aforementioned methods to encompass new functionality such as:
* Extending the message structure to target specific networks, thereby supporting multiple AVM (Algorand Virtual Machine) chains.
* Adding a new method that disables clients on providers.
* Adding a new method to discover provider capabilities, such as which networks and methods are supported.

This proposal serves as a formalization of the message schema and leaves the implementation details to the prerogative of the clients and providers.
### ARC 35 - Algorand Offline Wallet Backup Protocol
[Section titled “ARC 35 - Algorand Offline Wallet Backup Protocol”](#arc-35---algorand-offline-wallet-backup-protocol)
This document outlines the high-level requirements for a wallet-agnostic backup protocol that can be used across all wallets on the Algorand ecosystem.
### ARC 47 - Logic Signature Templates
[Section titled “ARC 47 - Logic Signature Templates”](#arc-47---logic-signature-templates)
This standard allows wallets to sign known logic signatures and clearly tell the user what they are signing.
### ARC 55 - On-Chain storage/transfer for Multisig
[Section titled “ARC 55 - On-Chain storage/transfer for Multisig”](#arc-55---on-chain-storagetransfer-for-multisig)
This ARC proposes the utilization of on-chain smart contracts to facilitate the storage and transfer of Algorand multisignature metadata, transactions, and corresponding signatures for the respective multisignature sub-accounts.
### ARC 59 - ASA Inbox Router
[Section titled “ARC 59 - ASA Inbox Router”](#arc-59---asa-inbox-router)
The goal of this standard is to establish a mechanism in the Algorand ecosystem by which ASAs can be sent to an intended receiver even if their account is not opted in to the ASA. A wallet custodied by an application will be used to custody assets on behalf of a given user, with only that user being able to withdraw assets. A master application will be used to map inbox addresses to user addresses; this master application can route ASAs to users, performing whatever actions are necessary. If integrated into ecosystem technologies, including wallets, explorers, and dApps, this standard can provide enhanced capabilities around ASAs, which are otherwise strictly bound at the protocol level to require opting in to be received.
### ARC 60 - Algorand Wallet Arbitrary Signing API
[Section titled “ARC 60 - Algorand Wallet Arbitrary Signing API”](#arc-60---algorand-wallet-arbitrary-signing-api)
This ARC proposes a standard for arbitrary data signing. It is designed to be a simple and flexible standard that can be used in a wide variety of applications.
# Disclosure of Vulnerabilities in Puya Smart Contract Compiler
This disclosure report contains technical details of two vulnerabilities in the Puya smart contract compiler.
**Date reported:** October 10, 2025
**Affected Versions:**
* PuyaPy: Versions `<5.3.2` and `<4.11.0` for the 4.x major version
* Puya-TS: Versions `<1.0.0-alpha.96` or `<1.0.0-beta.74`
### Summary of Vulnerability
[Section titled “Summary of Vulnerability”](#summary-of-vulnerability)
Two separate vulnerabilities that could affect smart contracts were discovered in the Puya smart contract compiler:
1. **Missing Assert:** An optimization bug affecting the Puya compiler for Algorand Python & Algorand TypeScript in a narrow version window could remove a final assert before a return.
2. **ARC-4 Encoding Length Check:** A class of bugs where ARC-4 Application Binary Interface (ABI) values were not always validated by default, and this behavior was not clearly documented.
Smart contract developers should use the resources in the **Steps to Reproduce** section below to assess their smart contract code for potential vulnerabilities and take immediate corrective action if any are discovered.
### Impact
[Section titled “Impact”](#impact)
Any smart contract written in Algorand Python or Algorand TypeScript and compiled with a vulnerable version of the Puya compiler could potentially suffer from insecure TEAL code in certain scenarios. Smart contract developers should review their code following the guidance in the **Steps to Reproduce** section to assess if contracts were compiled with affected versions of Puya and, if so, review the code carefully to identify conditions for which the smart contract may have vulnerabilities in the compiled TEAL.
As of the publication date, no direct impacts have been reported from the ecosystem.
### Technical Details
[Section titled “Technical Details”](#technical-details)
#### Discovery
[Section titled “Discovery”](#discovery)
On October 10, 2025, the Algorand Foundation received a report that a smart contract compiled with Puya was not checking ABI method arguments in the compiled TEAL code. Upon investigation by the engineering team, this was confirmed to be true and the full extent of missing validations was determined. In this process, the second missing assert vulnerability was also identified.
Further investigation found that multiple other Algorand smart contract languages, such as PyTeal, TEALScript, and Tealish, also did partial or no validation of ABI values, with varying degrees of documentation about compiler behavior in this regard.
#### Root Cause
[Section titled “Root Cause”](#root-cause)
The “missing assert” bug was caused by a human error in coding a peephole optimization in the compiler, and the error was not caught by a second reviewer.
The ARC-4 encoding length check vulnerability can be traced to insufficient documentation of the lack of validation, which was the Puya compiler’s default behavior by design.
#### Remediation
[Section titled “Remediation”](#remediation)
Going forward, Puya’s design will be secure-by-default, and security recommendations in the specs will be normative. Automatic validation will be applied during the compilation process unless the developer explicitly chooses to disable this behavior with a compiler flag. Additionally, an enhancement enables developers to apply ABI validations selectively to individual methods by using a new decorator.
### Strategic Mitigation Initiatives
[Section titled “Strategic Mitigation Initiatives”](#strategic-mitigation-initiatives)
The Algorand Foundation engineering team has implemented multiple strategic prevention measures to prevent future issues. These include strengthening regression tests for the Puya compiler, implementing clearer warnings when automatic validation is disabled, and improving release processes to require additional reviewers through standard operating procedures and automated CI/CD controls.
### Steps to Reproduce
[Section titled “Steps to Reproduce”](#steps-to-reproduce)
Two detailed guides for understanding each type of vulnerability and assessing whether it may affect contracts compiled with affected versions of Puya have been published on GitHub:
* [Missing Assert guide](https://github.com/algorandfoundation/puya/blob/main/security/002-missing-assert.md)
* [ARC-4 Encoding Length Check guide](https://github.com/algorandfoundation/puya/blob/main/security/001-arc4-encoding.md)
### Fixes / Patches Available
[Section titled “Fixes / Patches Available”](#fixes--patches-available)
The fix for both issues is available in the following package versions:
* PuyaPy: Versions `≥5.3.2`, or `≥4.11.0` for the 4.x major version
* Puya-TS: Versions `≥1.0.0-alpha.96` or `≥1.0.0-beta.74`
Upgrade Puya, recompile all contracts, and verify that the ARC-56 JSON shows Puya `≥4.11.0` or `≥5.3.2`. Developers are strongly encouraged to create tests verifying that oversized inputs are rejected and that previously missing asserts are now enforced.
All projects are advised to avoid older versions:
* PuyaPy: Versions `<5.3.2` and `<4.11.0` for the 4.x major version
* Puya-TS: Versions `<1.0.0-alpha.96` or `<1.0.0-beta.74`
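The affected-version check for PuyaPy can be sketched as a small helper (an illustrative utility, not an official tool; it compares plain numeric versions and does not handle pre-release tags like Puya-TS alphas/betas):

```python
# Sketch: decide whether a PuyaPy compiler version falls in the affected range,
# i.e. <5.3.2, or <4.11.0 within the 4.x major version line.

def puyapy_is_affected(version):
    parts = tuple(int(p) for p in version.split("."))
    if parts[0] == 4:
        return parts < (4, 11, 0)  # 4.x line was fixed in 4.11.0
    return parts < (5, 3, 2)       # all other lines were fixed in 5.3.2
```

For example, `4.10.0` and `5.3.1` are affected, while `4.11.0` and `5.3.2` are not.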
### Additional Information
[Section titled “Additional Information”](#additional-information)
The ARC-4 Encoding Length Check vulnerability can also affect other high-level smart contract languages:
* PyTeal does not perform validation by default. Apply the recommendations for manual validation found in the PyTeal documentation.
* TEALScript does not perform validation automatically for dynamic tuples or ABI return values, but does for static method arguments.
* Tealish supports fixed-size structs, but the compiler does not check them automatically. This behavior, however, is documented in the language guide.
Developers should also review any smart contracts written directly in TEAL to ensure the appropriate checks are performed around ABI values.
### Acknowledgements
[Section titled “Acknowledgements”](#acknowledgements)
Thanks to Folks Finance for discovering the vulnerabilities and reporting them responsibly.
Additionally, thanks to the Algorand Foundation Engineering team and MakerX for their swift and thorough response to the issues and assistance in reviewing smart contracts for various applications.
And, as always, thanks to the global Algorand community of validators, developers, and contributors who keep the network running, safe, and secure.
### References
[Section titled “References”](#references)
The GitHub repositories for Puya can be found at:
* Puya compiler back end and Algorand Python front end:
* Puya-TS front end for Algorand TypeScript: