Claude, a rival of OpenAI’s ChatGPT, can now analyze 150,000 words per prompt.



Will Shanklin

Today, OpenAI rival Anthropic introduced Claude 2.1. The latest version of the ChatGPT competitor expands its context window to 200,000 tokens, enough to paste the entire text of Homer’s The Odyssey for AI analysis. A context window is the maximum number of tokens a model can parse in a single request; tokens are the text chunks used to organize information. According to the company, version 2.1 also lowers Claude’s hallucination rate, producing fewer confidently false responses (like the fabricated case citations that got a ChatGPT-using attorney in trouble). Whether intentionally timed or not, the update arrives as Anthropic’s rival OpenAI descends into chaos.

Users can upload entire codebases, academic papers, financial statements, or lengthy literary works with Claude 2.1’s 200K-token context window, according to the company. (Anthropic says 200,000 tokens equate to more than 500 pages of content, or about 150,000 words.) After uploading the material, the chatbot can provide summaries, answer specific questions about the content, compare and contrast documents, or spot patterns that may be harder for humans to see.
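Anthropic’s figures imply a ratio of roughly 0.75 words per token (150,000 words to 200,000 tokens). A back-of-the-envelope check of whether a document fits in the window can be sketched from that ratio alone; this is a rough word-count heuristic, not a real tokenizer, and the helper names are illustrative:

```python
# Rough check of whether a text fits in Claude 2.1's 200K-token window,
# using the article's own ratio: 200,000 tokens ~ 150,000 words.
WORDS_PER_TOKEN = 150_000 / 200_000  # 0.75, per Anthropic's figures

def estimated_tokens(text: str) -> int:
    """Estimate token count from the whitespace-delimited word count."""
    return round(len(text.split()) / WORDS_PER_TOKEN)

def fits_in_window(text: str, window: int = 200_000) -> bool:
    """True if the estimated token count is within the context window."""
    return estimated_tokens(text) <= window

# A ~120,000-word text (roughly an English Odyssey translation) comes out
# to about 160,000 estimated tokens, comfortably inside the window.
print(fits_in_window("word " * 120_000))  # True
```

Real tokenizers split on subword units, so actual counts vary by language and vocabulary; this only captures the order of magnitude the article describes.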

In an announcement blog post, the company stated that processing a 200K-length message is “a complex feat and an industry first.” While Anthropic is excited to give users access to this potent new capability, it notes that Claude might need a few minutes to complete tasks that would normally take hours of human effort, and that it anticipates a significant decrease in latency as the technology develops.



This generation of AI chatbots is still rife with hallucinations: confidently stated false information. Anthropic claims Claude 2.1 hallucinates half as often as Claude 2.0. According to the company, Claude 2.1 is about twice as likely to admit it doesn’t know an answer as it is to give a wrong one, progress it attributes in part to a better ability to distinguish incorrect claims from admissions of uncertainty.

According to Anthropic, Claude 2.1 also makes 30% fewer mistakes in lengthy documents. Additionally, when working with the larger context window, it has a three- to four-times lower rate of “mistakenly concluding” that a document supports a particular claim.

The updated bot also adds a few perks specifically for developers. With the new Workbench console, devs can “refine prompts in a playground-style experience and access new model settings to optimize Claude’s behavior.” For instance, users can test various prompts and generate SDK code snippets. A new developer beta feature, “tool use,” lets Claude “integrate with users’ existing processes, products, and APIs.” The company’s examples include connecting to product datasets, using a web search API, translating plain language into structured API calls, and tapping into clients’ private APIs. Anthropic cautions that the tool-use feature is still in early development and asks customers for feedback.
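One of those examples, translating plain language into structured API calls, can be sketched in miniature. In the real beta the model itself decides which tool to invoke; here a trivial keyword match stands in for that decision, and every name (the tool registry, the endpoints, the `route` helper) is hypothetical rather than Anthropic’s actual schema:

```python
# Toy illustration of "plain language -> structured API call".
# A keyword match stands in for the model's tool-selection step;
# all tool names and endpoints below are made up for illustration.
TOOLS = {
    "weather": {"endpoint": "get_weather"},
    "search":  {"endpoint": "web_search"},
}

def route(request: str) -> dict:
    """Map a natural-language request to a structured call description."""
    lowered = request.lower()
    for keyword, tool in TOOLS.items():
        if keyword in lowered:
            return {"endpoint": tool["endpoint"], "raw_request": request}
    # No matching tool: the request would be answered directly instead.
    return {"endpoint": None, "raw_request": request}

print(route("What's the weather in Paris?"))  # routes to "get_weather"
```

The point of the real feature is that this routing, plus filling in the call’s arguments, is done by the model against tools the developer registers, rather than by hand-written keyword rules like these.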
