Amazon announces multimodal AI models Nova

Xinhua, December 4, 2024

Amazon Web Services (AWS), Amazon's cloud computing division, announced Tuesday a new family of multimodal generative AI models called Nova at its re:Invent conference.

There are four text-focused models in total: Micro, Lite, Pro, and Premier. The first three became available to AWS customers on Tuesday, while Premier will launch in early 2025.

"We've continued to work on our own frontier models," Amazon CEO Andy Jassy said, "and those frontier models have made a tremendous amount of progress over the last four to five months."

The text-focused Nova models, which are optimized for 15 languages, are mainly differentiated by their capabilities and sizes.

Micro can only take in text and output text, and delivers the lowest latency of the bunch -- processing text and generating answers the fastest. Lite can process image, video, and text inputs reasonably quickly. Pro offers the best combination of accuracy, speed, and cost for various tasks. And Premier is the most capable, designed for complex workloads, according to AWS.

Micro has a 128,000-token context window, which can process up to around 100,000 words. Lite and Pro have 300,000-token context windows, which work out to around 225,000 words, 15,000 lines of computer code, or 30 minutes of video, it said.
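The word counts above follow from a rough conversion between tokens and English words. As a minimal sketch, assuming the common rule of thumb of about 0.75 words per token (an assumed average, not a figure from AWS):

```python
# Estimate how many English words fit in a given context window.
# WORDS_PER_TOKEN is an assumed rule-of-thumb ratio; the real ratio
# varies with the tokenizer and the text.
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int, words_per_token: float = WORDS_PER_TOKEN) -> int:
    """Rough word-count estimate for a context window of `tokens` tokens."""
    return round(tokens * words_per_token)

print(tokens_to_words(128_000))  # ~96,000 words, i.e. "around 100,000 words" for Micro
print(tokens_to_words(300_000))  # ~225,000 words for Lite and Pro
```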

In early 2025, certain Nova models' context windows will expand to support over 2 million tokens, AWS said.

"We've optimized these models to work with proprietary systems and APIs, so that you can do multiple orchestrated automatic steps -- agent behavior -- much more easily with these models," Jassy said.

In addition, there's an image-generation model, Nova Canvas, and a video-generating model, Nova Reel. Both have launched on AWS.

Jassy said AWS is also working on a speech-to-speech model for the first quarter of 2025, and an "any-to-any" model for around mid-2025. "You'll be able to input text, images, or video and output text, speech, images, or video," Jassy said of the any-to-any model.
