Smarter, Faster, Cheaper: How Sonnet 3.5 Is Now The Best AI Model [Business Prompts Included]

Plus: Overcoming the challenges of data-driven decisions with AI

In today’s edition…

Today, we have a special treat for you. We’re focusing on practical applications, seasoned by the research that made them possible.

Dive in, and make sure to integrate at least one of these practical use cases into your internal workflows by the end of today.

Join 9,000+ founders getting actionable golden nuggets that are tailored to make your business more profitable.


Top Performance and They’re Not Even Done Yet…

Claude 3.5 Sonnet has been released, and early impressions have been nothing short of amazement.

Its reasoning has become so contextually aware that some voices on the internet claim it has already passed the mirror test. That claim is debatable, but the fact that this conversation is now more present than ever is itself a huge factor to consider.

Previous models are already trained on almost all the data available on the internet. So how is it possible to keep improving them?

Synthetic data is the new frontier: together with architectural tweaks that enhance model quality, it addresses the limits of training on existing internet data.


As a mid-tier model, it already outperforms competitor models like GPT-4o on key evaluations, at twice the speed of Claude 3 Opus and one-fifth the cost.

It is the new model-eval leader according to Hume AI and multiple other benchmarks external to Anthropic's own.

What’s even more mind-blowing: this is not even Anthropic's most intelligent Claude model, and it’s already beating every other LLM out there.


Shareable Projects:

You can now organize chats with Claude into shareable Projects, available to Pro and Team users.

Claude Projects allow users to create custom workspaces similar to custom GPTs in ChatGPT, where multiple files can be uploaded for context, and each chat within the project references this information.

For instance, a project named “Research Papers” can be created to analyze and respond to questions about uploaded PDFs of research papers.

Each project includes a 200K context window, so you can include relevant documents, code, and files.

You can also set custom instructions within each project to further tailor Claude's responses and switch back to the project from the homepage to ensure all prompts utilize the uploaded knowledge.


The Artifacts window comes with a built-in code visualizer that’s going to be crucial for generating web applications, presentations, designs, tables, and code in a separate window alongside the chat.


Creating Prompt Templates:

After completing a project, if you’re happy with the output, ask Claude to generate a reusable prompt template. You can apply the template to similar projects by simply uploading new data, achieving consistency while saving lots of time.

Say you just generated prompts for interactive posters, with a dark-mode interface, expandable sections, dynamic data visualization, and clickable elements. You could then ask:

Give me a reusable prompt template that I can use to generate similar projects every time. For example, after uploading a PDF, how should I ask you to create an interactive research poster with unique visuals and data visualization, dark mode, and specific sections with clickable elements? Please provide the exact wording of the prompt that I should use.

What’s so cool about this is that you can star chats and add them to your starred section in the left-side panel, so you’ll have fast and easy access to them.

Data Handling:

To make the most of Sonnet 3.5’s ability to handle complex data analysis, convert your CSV files to JSON format. This ensures more efficient processing and visualization.
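As a quick illustration, here is a minimal Python sketch of that CSV-to-JSON conversion before uploading to Claude (the `sales.csv` filename and its columns are hypothetical):

```python
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> None:
    """Convert a CSV file into a JSON array of row objects."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        # Each row becomes a dict keyed by the header row
        rows = list(csv.DictReader(f))
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)

# Example: csv_to_json("sales.csv", "sales.json"), then upload sales.json
```

Note that `csv.DictReader` keeps every value as a string; cast numeric columns yourself if your analysis needs them typed.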

Business Use Cases to Apply Today:

1. Web Applications and UI Designs:

You can take screenshots of websites (e.g., Spotify or YouTube homepage) and get a very close UI match in the first shot.

This is the prompt you can use:

Create a web application based on this screenshot. Use the visual design and layout from the screenshot to generate the HTML, CSS, and JavaScript necessary to replicate the website.

2. Interactive Dashboards:

You can transform PDFs into interactive dashboards with tabs and quizzes to visualize and manipulate data much more easily. As you prompt through a dashboard’s creation, iteratively add new features that make it more and more powerful. Here are some ideas:

Check the Business Edge section for specific prompts on Product Inventory Management, Sales Performance Visualization, Customer Engagement and Analytics Tools, and Employee Productivity and Performance Tracking.

3. System Architecture:

Create system architectures for web applications or other complex systems. This involves generating detailed flowcharts and diagrams that represent the structure and components of the system.

SEO Tool System Architecture:

You can create an SEO web application that allows you to upload blog posts, get keyword recommendations, receive feedback scores, and view ranking criteria.

The prompt would go like this:

Create a system architecture for an SEO web application that allows users to upload blog posts, get keyword recommendations, feedback scores, and ranking criteria. Use mermaid to generate the diagram.

Claude generates a detailed system architecture diagram that visually breaks down the components of the SEO tool. The diagram includes databases (user, blog post, keyword), external services (SEO ranking API), backend processes, and frontend interfaces.

Mermaid is used here to create visually appealing, interactive architecture diagrams.
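For reference, a diagram of the kind described might be sketched in Mermaid along these lines (the component names are illustrative assumptions, not Claude's exact output):

```mermaid
flowchart LR
    UI[Frontend Interface] --> API[Backend Processes]
    API --> UserDB[(User Database)]
    API --> PostDB[(Blog Post Database)]
    API --> KeyDB[(Keyword Database)]
    API --> SEO[SEO Ranking API]
    API --> Score[Feedback Scoring Service]
```

Pasting a snippet like this into the prompt as a starting point is also a handy way to steer the level of detail Claude produces.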

Check the Business Edge section for specific prompts on E-Commerce Platform, CRM System, HRMS, Financial Management System, Content Management System, and Incident Management System.

4. Computer Vision and Machine Learning/AI Applications:
Real-time object detection:

You can create a real-time object detection system using TensorFlow.js. The application uses the webcam to detect and label objects in real-time.

The prompt would look like this:

Create a single HTML file for a real-time object detection web application using TensorFlow.js and the COCO SSD model. The application should detect objects in real-time using the user's webcam. Provide the complete runnable HTML file with inline CSS and JavaScript.

We use TensorFlow.js and COCO-SSD for the object recognition model. You can check out the full repo here.

It’s about time we started seeing real-time object and facial detection AR applications, personalized recommendation engines, predictive maintenance systems, and many similar applications built using LLMs, in a fraction of the time it used to take.


You cannot yet generate images within Claude the way you can with GPT-4o or other LLMs. What you can do as an alternative is generate images in SVG format, which are rendered from code.


How a “mid-tier” model can beat every other LLM out there.

What has so many jaws dropping about Sonnet 3.5 is that, as good as it is, it’s not even Anthropic's best LLM.

And the reason we believe it’s incredibly good at coding and other tasks is that Anthropic figured out a way to understand and control its inner workings, much like doing brain surgery on an AI.

By using a special technique called sparse autoencoders (SAEs), they can identify and adjust specific features within the model’s millions of tiny parts, like tweaking switches in a complex machine.

We introduced this topic about the “AI Blackbox” and why it was going to be so revolutionary here.

This allows them to enhance desirable behaviors, such as writing accurate code, without the need for long and costly retraining processes.

Training an SAE means teaching it to recognize these features. It’s a bit like teaching someone to read by showing them lots of different books. 

Steering the Model: Like Brain Surgery

With this understanding, you can then “steer” the model’s behavior.

  • Want it to avoid unsafe topics? Adjust (or clamp) the unsafe feature.

  • Want it to reduce bias? Clamp the bias feature.

  • Want it to write better code? Enhance the features related to correct coding.

This approach is revolutionary because it makes changing the AI’s behavior much faster and cheaper. Instead of retraining the entire model from scratch, they can now make precise adjustments to steer the model towards better performance.
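To make the clamping idea concrete, here is a toy NumPy sketch of steering via an SAE-style feature. The weights are random and untrained, and the feature index is arbitrary; this is purely illustrative, not Anthropic's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 16-dim model activation decomposed into 64 sparse features
d_model, d_features = 16, 64
W_enc = rng.normal(size=(d_features, d_model)) * 0.1
W_dec = rng.normal(size=(d_model, d_features)) * 0.1
b_enc = np.zeros(d_features)

def encode(x):
    # ReLU encoder: in a trained SAE, most feature activations are zero
    return np.maximum(0.0, W_enc @ x + b_enc)

def decode(f):
    # Linear decoder reconstructs the activation from the feature vector
    return W_dec @ f

def steer(x, feature_idx, value):
    """Clamp one feature to a chosen value, then reconstruct the activation."""
    f = encode(x)
    f[feature_idx] = value   # e.g. zero out an "unsafe" feature,
    return decode(f)         # or boost a "correct code" feature

x = rng.normal(size=d_model)                  # stand-in for a model activation
steered = steer(x, feature_idx=7, value=0.0)  # suppress hypothetical feature 7
```

In a real setting, the steered reconstruction would be fed back into the model's forward pass in place of the original activation; here it just shows the mechanics of adjusting one switch without touching the rest.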

Anthropic's innovation doesn't stop there.

They have also explored how certain features within the model are linked to specific tasks through a process known as monosemanticity. This means that some parts of the model are dedicated to understanding particular concepts or tasks. For instance, there could be a specific feature that is solely responsible for understanding programming languages.

Identifying and manipulating these features allows them to fine-tune the model even further, making it more efficient and targeted in its responses.

Finally, increased performance is possible because scaling has also become more efficient. As models become larger and more complex, the ability to control and understand them using these methods improves.

This scaling monosemanticity indicates that with bigger models, the control Anthropic can exert becomes more precise.

Check out the full papers by Anthropic, which today make it one of the industry's leading players not only in model performance but in research as well.

Overcoming the Challenges of Data-Driven Decisions With AI

Transforming raw data into actionable insights is one of the most common and persistent challenges companies face.

Even with AI, many businesses are not able to harness its full potential and leverage all the data they generate and receive.

Yet if done well, embedding AI at every stage of the data lifecycle will help you accelerate this process significantly, reducing the time needed from weeks to mere hours or minutes.

Let’s explore some of these challenges and potential solutions.

The Power of Modular and Composable AI Solutions

Adopting a modular and composable approach to AI technology can significantly benefit businesses.

Just as building blocks can be connected to create various structures, AI solutions can be designed to be reusable and interconnected to tackle complex business problems.

This modular approach offers scalability, flexibility, and faster implementation, allowing your business to quickly adapt to changing needs.

By thinking in terms of building blocks, just like Legos stacking one on top of the other, organizations can ensure each component is optimized and easily integrated with the others.

Let Data Engineers Become Knowledge Engineers

Using AI as leverage for processing your data has many implications.

One that Prinkal Pal from Lego AI shared, and that really struck us, is the evolving role of data engineers.

Their role will evolve in a way that enhances the value they bring to their organizations, making them key contributors to business success.

They bring domain expertise and business understanding to the table, validating outputs and ensuring that the insights generated by the AI align with business objectives.

Get Rid of The Bottleneck; Accelerate the Data Lifecycle

The slow process of transforming raw data into actionable insights can be a major bottleneck.

Traditional data processing methods often involve fragmented and inefficient ETL (Extract, Transform, Load) processes.

The goal is to automate many of these tasks with AI to drastically cut down the time required, for faster and more reliable decision-making.

Key Data-Driven Activities AI Automates

These are some of the most relevant benefits you can access by leveraging AI and plugging it into your processes:

Solving the Data Nomenclature Problem

As your business scales and grows, it’s common that what one team calls “customer” another might call “client” or “lead.”

This leads to massive confusion and inefficiencies that you should fix sooner rather than later.

With AI, a common language interface can be created so your team can ask questions in natural language and have them instantly translated into analytical queries.

What used to take your data engineers weeks now takes days or even less.

Eradicating Code Spaghetti in Data Processing

If AI is not plugged into your data pipelines, traditional processing methods often lead to an entangled mess known as “code spaghetti.” These pipelines are difficult to manage and prone to errors.

You can automate data connections and eliminate the need for pre-built data pipelines, simplifying data management by orders of magnitude.

Simplifying Business Reviews and Strategy Sessions

Managers are usually running behind the clock and under tremendous pressure to justify their strategies and performance to stakeholders.

An AI assistant can quickly generate diagnostic summaries and insights. For example, a marketing manager can ask, “Why are our sales declining?” and receive a comprehensive analysis that considers various data sources and competitive intelligence.

This is a call to action for you to adopt modular AI solutions, automate data processes, and create a more agile and responsive data ecosystem.

Check out our full podcast episode, where we expand on how Lego AI makes these AI solutions possible. We also touch on broad AI concepts like solving real problems with AI, rather than simply following hype, and some cool predictions for the AI realm throughout 2024.


Applications That Will Elevate Your Workflows Today


Let’s get those creative juices flowing and brainstorm some ideas of applications you can try for optimizing workflows within your business.

Interactive Dashboards:
  1. Customer Engagement and Analytics Tools: Develop tools for analyzing customer behavior and engagement, enabling businesses to track customer journeys and optimize marketing strategies.


Login or Subscribe to get access to all the System Architecture and Interactive Dashboards use cases with provided prompts we curated for this edition.

Continue reading today’s edition and get full access to our AI Intelligence platform, getting access to:

✔️ The full AI Deep Dives library

✔️ Full AI Webinar Workshops library

✔️ Premium Podcast Exclusive Guides

✔️ And much more…

Thanks for reading, and until the next one!


How was today's Future Friday, cavebros and cavebabes?


We appreciate all of your votes. We would love to read your comments as well! Don't be shy, give us your thoughts, we promise we won't hunt you down. 😉

