
How does NEAR ride the wave of AI?

Summary: Isn't NEAR an all-in chain abstraction?
2024-03-13 11:42:11

Written by: Haotian

Recently, news that NEAR founder @ilblackdragon will appear at NVIDIA's AI conference has drawn significant attention to the NEAR chain, and the price action has been encouraging as well. Many friends are puzzled: isn't NEAR all in on chain abstraction? How did it suddenly become a leading AI chain? Below I share my observations, along with some background on how AI models are trained:

1) NEAR founder Illia Polosukhin has a long background in AI and is a co-author of the Transformer architecture paper. The Transformer is the foundational architecture behind large language models (LLMs) such as ChatGPT, which shows that before founding NEAR, Illia already had experience building and leading large-model AI systems.

2) NEAR launched NEAR Tasks at NEARCON 2023, a platform for training and improving AI models. In short, model-training demanders (Vendors) publish task requests on the platform and upload raw data, while users (Taskers) take on manual work such as text annotation and image recognition. On completion, the platform rewards Taskers with NEAR tokens, and the human-annotated data is used to train the corresponding AI models.

For example: if an AI model needs to get better at recognizing objects in images, a Vendor can upload a large set of raw images containing different objects to the Tasks platform. Users then manually annotate the objects' locations in each image, producing a large volume of "image - object location" pairs from which the model can learn to improve its image-recognition capability.
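As a rough illustration, such "image - object location" pairs might look like the records below. The field names and account names are hypothetical, not NEAR Tasks' actual schema:

# Hypothetical example of crowd-sourced bounding-box annotations.
# This schema is illustrative only; it is not NEAR Tasks' real data format.
annotations = [
    {
        "image": "street_001.jpg",
        "objects": [
            {"label": "car",        "bbox": [34, 120, 310, 290]},   # [x_min, y_min, x_max, y_max]
            {"label": "pedestrian", "bbox": [420, 95, 470, 260]},
        ],
        "tasker": "alice.near",   # annotator rewarded in NEAR tokens
    },
]

# A vision model can then learn to map raw pixels to these labeled boxes.
for record in annotations:
    for obj in record["objects"]:
        print(record["image"], obj["label"], obj["bbox"])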

At first glance, NEAR Tasks looks like a crowdsourced manual-labor service for AI models. Is that really so important? Some background on AI model training helps here.

A complete AI model training process typically includes data collection, data preprocessing and annotation, model design and training, tuning and fine-tuning, validation and testing, deployment, and monitoring and updating. Data annotation and preprocessing are the human-driven parts, while model training and optimization are the machine-driven parts.
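To make the manual/machine split concrete, here is a minimal sketch, assuming scikit-learn is available and using dummy stand-in data: humans supply the labeled pairs, and the machine stage fits a model to them.

# Minimal sketch of the manual vs. machine stages. Dummy data, illustrative only.
from sklearn.linear_model import LogisticRegression

# "Manual" part: humans produce (features, label) pairs through annotation.
X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
y = ["cat", "dog", "cat", "dog"]          # human-assigned labels

# "Machine" part: the model fits itself to the annotated data.
model = LogisticRegression().fit(X, y)
print(model.predict([[0.15, 0.85]]))      # expected: ['cat']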

The machine part is generally assumed to matter far more than the manual part because it looks more high-tech, but in reality manual annotation is crucial to the entire training process.

Manual annotation adds labels to objects (people, places, things) in images so that computer-vision models can learn from them; it transcribes speech into text and tags specific syllables, words, and phrases to help train speech-recognition models; and it attaches emotion labels such as happy, sad, or angry to text to improve an AI's sentiment-analysis skills.
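For the sentiment case, the human-labeled data can be as simple as the following (illustrative records only):

# Illustrative only: human-labeled sentiment data of the kind described above.
sentiment_samples = [
    {"text": "I love this product!",      "label": "happy"},
    {"text": "The delivery was so late.", "label": "angry"},
    {"text": "I miss the old design.",    "label": "sad"},
]
# Such (text, label) pairs become the training set for a sentiment classifier.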

It is not hard to see that manual annotation is the foundation on which machines do deep learning. Without high-quality annotated data, a model cannot learn efficiently; and if the volume of annotated data is too small, model performance will be capped.

Currently, many vertical AI products are built by fine-tuning or specially training on top of a large base model such as ChatGPT. Essentially, they add new data sources, especially manually annotated data, on top of the base model's data foundation.

For instance, suppose a medical company wants to train a medical-imaging AI to offer hospitals an online AI consultation service. It can upload a large amount of raw medical imaging data to the Tasks platform, let users annotate it through tasks, and then use the resulting human-annotated data to fine-tune and optimize a general-purpose large model such as ChatGPT, turning a general AI tool into a vertical-domain expert.
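As a rough sketch of what that pipeline step could look like: converting crowd-annotated records into a chat-style fine-tuning file. The records and filename are invented; the JSONL "messages" layout follows the common chat fine-tuning convention used by OpenAI-style APIs.

# Hedged sketch: turning hypothetical crowd-annotated records into
# fine-tuning data. Not any platform's actual export format.
import json

annotated = [
    {"finding": "opacity in lower left lung field", "label": "possible pneumonia"},
    {"finding": "no abnormal shadows detected",     "label": "normal"},
]

with open("medical_finetune.jsonl", "w") as f:
    for rec in annotated:
        f.write(json.dumps({
            "messages": [
                {"role": "user",      "content": f"Imaging finding: {rec['finding']}"},
                {"role": "assistant", "content": rec["label"]},
            ]
        }) + "\n")
# This file could then be submitted to a fine-tuning job to specialize
# a general-purpose model for the medical vertical.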

However, the Tasks platform alone is clearly not enough for NEAR's ambition to become a leading AI chain. NEAR is also providing AI Agent services within its ecosystem to automatically execute on-chain actions for users: with a single authorization, users can freely buy and sell assets in the market. This resembles the Intent-centric approach, using AI automation to improve the experience of on-chain interaction. In addition, NEAR's strong DA (data availability) capabilities can support the traceability of AI data sources, tracking the validity and authenticity of model-training data.
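The intent-centric idea can be sketched as follows. Every name here (Intent, agent_plan, max_spend) is invented for illustration and does not correspond to any actual NEAR API: the user signs a high-level goal once, and an agent works out the concrete on-chain steps.

# Purely hypothetical illustration of an intent-centric flow; not a NEAR API.
from dataclasses import dataclass

@dataclass
class Intent:
    owner: str          # account that authorized the agent
    goal: str           # high-level instruction, not a concrete transaction
    max_spend: float    # spending cap enforced on the agent

def agent_plan(intent: Intent) -> list[str]:
    # A real agent would query markets and compose transactions; here we
    # just return placeholder steps for the authorized goal.
    return [
        f"check balances for {intent.owner}",
        f"route order: {intent.goal} (cap {intent.max_spend} NEAR)",
        "submit signed transactions on-chain",
    ]

print(agent_plan(Intent("alice.near", "swap 10 NEAR to USDC at best price", 10.0)))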

In summary, backed by a high-performance chain, NEAR's technical extensions and narrative in the AI direction look considerably more compelling than chain abstraction alone.

A month and a half ago, when I analyzed NEAR's chain abstraction, I saw the advantage of combining NEAR's chain performance with the team's strong web2 resource-integration capabilities. I did not expect that, before chain abstraction even gained wide adoption, this wave of AI empowerment would expand the imagination once again.

Note: long-term attention should still focus on NEAR's layout and product progress in "chain abstraction." AI will be a nice bonus and a catalyst for the bull market! #NEAR
