Ethereum’s Vitalik Buterin Supports TiTok as a Blockchain App
According to Ethereum (ETH) co-founder Vitalik Buterin, the new image compression method, the Transformer-based 1-Dimensional Tokenizer (TiTok), can encode images in a size small enough to be stored onchain.
On his Warpcast social media account, Buterin called the image compression method a new way of “encoding a profile picture.” He went on to say that if an image could be compressed down to 320 bits, which he called “basically a hash,” it would be small enough to go onchain for each user.
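As a rough back-of-the-envelope sketch (not taken from Buterin’s post or the TiTok paper), the 320-bit figure works out if each of the 32 tokens is an index into a codebook of 1,024 entries, i.e. 10 bits per token; the codebook size below is an assumption chosen to match that figure.

```python
import math

# Back-of-the-envelope check of the "320 bits" figure (illustrative only).
num_tokens = 32          # tokens per image, as reported for TiTok
codebook_size = 1024     # assumed codebook size; the real model's may differ
bits_per_token = math.log2(codebook_size)   # log2(1024) = 10 bits per token

total_bits = num_tokens * bits_per_token
print(f"{num_tokens} tokens x {bits_per_token:.0f} bits = {total_bits:.0f} bits")
# -> 32 tokens x 10 bits = 320 bits (40 bytes), roughly hash-sized
```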
The Ethereum co-founder learned about TiTok through an X post by a researcher at the artificial intelligence (AI) imaging platform Leonardo AI.
The researcher, posting under the handle @Ethan_smith_20, briefly explained how the method could help those interested in reinterpreting high-frequency details within images to encode complex images into just 32 tokens.
Buterin’s perspective suggests the method could make it much easier for developers and creators to produce profile pictures and non-fungible tokens (NFTs).
Fixing previous image tokenization issues
TiTok, developed through a collaboration between TikTok parent company ByteDance and the Technical University of Munich, is described as an innovative one-dimensional tokenization framework that diverges significantly from the prevailing two-dimensional methods.
According to the research paper on the image tokenization method, TiTok can compress 256 x 256 pixel images into “32 distinct tokens.”
The paper highlighted problems with earlier image tokenization methods such as VQGAN: tokenization was possible, but strategies were limited to “2D latent grids with fixed downsampling factors.”
Such 2D tokenization struggles to manage the redundancy found within images, where neighboring regions tend to be very similar.
TiTok promises to solve this problem by tokenizing images into 1D latent sequences that provide a “compact latent representation” and eliminate region redundancy.
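For illustration only, the sketch below contrasts the token count of a conventional 2D latent grid, assuming a typical fixed downsampling factor of 16 (a value chosen for this example, not taken from the paper), with TiTok’s fixed 32-token 1D sequence.

```python
# Illustrative token-count comparison for a 256 x 256 image (assumptions noted inline).
image_size = 256          # 256 x 256 pixel input, as described in the paper
downsample_factor = 16    # assumed fixed downsampling factor for a 2D tokenizer

# A 2D tokenizer maps the image onto a (H/f) x (W/f) latent grid of tokens.
tokens_2d = (image_size // downsample_factor) ** 2   # 16 x 16 = 256 tokens

# TiTok instead emits a fixed-length 1D sequence, independent of any grid.
tokens_1d = 32

print(f"2D latent grid: {tokens_2d} tokens")
print(f"1D sequence:    {tokens_1d} tokens ({tokens_2d // tokens_1d}x fewer)")
```

Under these assumptions, the 1D sequence is eight times shorter than the 2D grid for the same input image.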
The tokenization strategy could also simplify image storage on blockchain platforms while offering notable improvements in processing speed.
The paper reports speeds up to 410 times faster than current technologies, a significant leap in computational efficiency.