• FREE COURSE
    LLM Evaluation with Opik
    Learn to test and evaluate your LLM applications using the latest tools and techniques, including LLM-as-a-judge metrics and production LLM monitoring.

    https://www.comet.com/site/llm-course/
    WWW.COMET.COM
    LLM Course
    Register for this course and learn to build modern software with LLMs using the newest tools and techniques in the field.
  • This is the only course completely focused on applying state-of-the-art LLM evaluation techniques to real-world applications. We will cover some theory, but this is first and foremost a course in applied AI, not mathematics.
    https://www.comet.com/site/llm-course/
    #AI #LLM #MachineLearning #AppliedAI #ArtificialIntelligence #AIApplications #TechInnovation #AITraining #LLMEvaluation #AITheory
  • https://www.fine.dev/
    Fine | AI Coding Tool for Startups | AI Developer Agents
    Fine is an AI coding agent that lets software developers and programmers at startups use LLMs to write code and complete dev tasks. Sign up to Fine, the AI coding tool for startups.
  • https://openrouter.ai/
    OpenRouter provides a unified API that gives you access to hundreds of AI models through a single endpoint, while automatically handling fallbacks and selecting the most cost-effective options. Get started with just a few lines of code using your preferred SDK or framework.

    #openrouter #unifiedllms #llmsaggregator #huggingface
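    The "few lines of code" claim above can be sketched with a plain HTTP call to OpenRouter's OpenAI-compatible chat-completions endpoint. The model slug and the `ask` helper are illustrative assumptions, not something from the linked page:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload; OpenRouter routes it to the chosen provider."""
    return {
        "model": model,  # e.g. "anthropic/claude-3.5-sonnet" (illustrative slug)
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model: str, prompt: str) -> str:
    """Send one prompt through OpenRouter's unified endpoint and return the reply."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

    Because the payload is the standard OpenAI chat format, switching providers is just a matter of changing the model slug.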
    OPENROUTER.AI
    OpenRouter
    A unified interface for LLMs. Find the best models & prices for your prompts
  • https://stackblitz-labs.github.io/bolt.diy/#setup
    bolt.diy allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.

    #bolt #bolt_diy #opensource #openai #claude #groq #deepseek #vercel #v0
  • https://github.com/stackblitz-labs/bolt.diy?tab=readme-ov-file
    Welcome to bolt.diy, the official open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
    GITHUB.COM
    GitHub - stackblitz-labs/bolt.diy: Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
    Prompt, run, edit, and deploy full-stack web applications using any LLM you want! - stackblitz-labs/bolt.diy
  • https://www.tiktok.com/@theaiconsultinglab/video/7472894662848662830
    https://console.anthropic.com/dashboard
    https://platform.openai.com/playground/chat?models=gpt-4o

    Did you know that both Anthropic and OpenAI have prompt generator tools that can save you probably 90-95% of the time you spend writing prompts for LLMs?

    *Disclaimer*: you will need to create an account on their developer platform and fund it with $5, but there is no ongoing fee to access the platform, and the $5 will go a LONG way for creating prompts.

    Prompt engineering matters, there's no doubt about that, but the model companies are making progress towards a world where it matters less. With every new model release, the reliance on well-structured, carefully engineered prompts shrinks, because the models get better at understanding user intent. On top of that, we could start seeing some of these companies build a workflow like this prompt generator directly into their products, translating your requests into well-structured prompts automatically.

    You will still need to be able to diagnose and troubleshoot your prompts when they don't produce the desired result, but using these tools can save you a massive amount of time. For most of my use cases, I typically use the Anthropic console to write my prompts, and it requires very little editing.

    Have you tried these tools out yet? If so, what are your impressions of the quality of the prompts they produce?

    #ai #chatgpt #aiconsulting #llm #deepseek #claude #openai
  • https://www.promptingguide.ai/agents/introduction

    In this guide, we refer to an agent as an LLM-powered system designed to take actions and solve complex tasks autonomously. Unlike traditional LLMs, AI agents go beyond simple text generation. They are equipped with additional capabilities, including:

    - Planning and reflection: AI agents can analyze a problem, break it down into steps, and adjust their approach based on new information.
    - Tool access: They can interact with external tools and resources, such as databases, APIs, and software applications, to gather information and execute actions.
    - Memory: AI agents can store and retrieve information, allowing them to learn from past experiences and make more informed decisions.

    This lecture discusses the concept of AI agents and their significance in the realm of artificial intelligence.
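    As a toy illustration of those three capabilities, here is a minimal, stubbed agent loop. The planner, tool registry, and goal format are all invented for the example; in a real agent, the planning step would be an LLM call:

```python
def calculator(expression: str) -> str:
    """A demo tool: evaluate simple arithmetic (digits and + - * / only)."""
    allowed = set("0123456789+-*/. ()")
    assert set(expression) <= allowed, "unsafe expression"
    return str(eval(expression))

TOOLS = {"calculator": calculator}  # tool access: callable external capabilities

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in for the LLM planner: break the goal into (tool, input) steps."""
    if goal.startswith("compute "):
        return [("calculator", goal.removeprefix("compute "))]
    return []

def run_agent(goal: str, memory: list[str]) -> str:
    result = ""
    for tool_name, tool_input in plan(goal):                      # planning
        result = TOOLS[tool_name](tool_input)                     # tool access
        memory.append(f"{tool_name}({tool_input}) -> {result}")   # memory
    return result

memory: list[str] = []
print(run_agent("compute 6 * 7", memory))  # -> 42
```

    The point is the control flow: decompose the goal, dispatch each step to a tool, and record what happened so later steps (or later tasks) can consult it.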
  • https://youtu.be/Iabue7wtE4g?si=NlvvXGn80ZN80MxE
    In this video, I go hands-on with the Anthropic computer-use models and tools, explain how they work, and show how you can get started with Docker on your own computer.

    For more tutorials on using LLMs and building agents, check out my Patreon
    Patreon: / samwitteveen
    Twitter: / sam_witteveen

    Computer Use: https://www.anthropic.com/news/develo...
    Computer Use Docs: https://docs.anthropic.com/en/docs/bu...
    Github: https://github.com/anthropics/anthrop...
  • https://youtu.be/Ajeb_GuHBDQ?si=LTli2wDF9BMK2Biw&t=225

    How to use UI-TARS on your desktop using vLLM (start at 3:45)
  • https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo

    2025 is shaping up to be a transformative year in artificial intelligence (AI) as Chinese tech giants take the lead with groundbreaking announcements that are setting new standards for innovation. While DeepSeek R1 made waves with its reasoning capabilities last week, and ByteDance’s Doubao 1.5 Pro stunned observers by outperforming GPT-4o, Qwen AI has dropped a bombshell: two open-source models that handle 1 million tokens of context, enough to process the entire Lord of the Rings trilogy in one go.

    Useful Links:
    - https://chat.qwenlm.ai/
    - https://www.alibabacloud.com/help/en/model-studio/developer-reference/what-is-qwen-llm
    - https://qwenlm.github.io/blog/qwen2.5-max/
    - https://www.analyticsvidhya.com/blog/2025/01/qwen2-5-max/
    HUGGINGFACE.CO
    Qwen2.5 Max Demo - a Hugging Face Space by Qwen
    Discover amazing ML apps made by the community
  • Some suggestions for dealing with the rate-limit crisis ...
    Useful links:
    - https://github.com/cline/cline/discussions/871?sort=new
    - https://github.com/RooVetGit/Roo-Code
    - https://github.com/cline/cline/issues/923
    - https://github.com/cline/cline/issues/713

    Useful solutions to consider:
    1. Limiting yourself to 1 API request per minute no longer seems to trigger the 429 error, so that would be a good starting point.

    2. This issue seems to be related to the amount of context being sent to the API on every request. The further you get into a conversation, the more context there is, and by default Anthropic imposes a limit of 40,000 tokens (though it seems like you have 80,000 somehow). One potential fix would be to have Cline automatically retry when it gets a rate-limit error.

    2.1 Another would be to have Cline heavily truncate the amount of context being sent back to the API (I believe Cursor and Windsurf do this). Neither solution is perfect, though: truncating the context leads to far more bad outcomes, while an auto-retry means you will be sitting there waiting a while for your response.

    3. I am not hitting rate limit issues with https://github.com/RooVetGit/Roo-Cline

    4. You can use Claude 3.5 Sonnet via OpenRouter; the rate limit is gone.

    5. For how long, and after how many repeated edit attempts? It accumulates context the whole time. If you ask it to change something 30 times in a small file, it can run to hundreds of thousands of tokens because of the previous context. If I just cut the task into smaller ones, I don't hit any limit, of course.
    It looks like a lot of people have absolutely no idea how LLMs work.

    6. For me it is failing only when I go over $2 USD in total price for the current task. For several steps it is OK, and then it starts limiting, but really, if I wait 5 minutes it is doable (just a few times I hit the 200k token limit).

    7. I think the best solution is to add a wait/sleep option to Cline so that when it receives this error, it waits 60 seconds and tries again.
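    The wait-and-retry behavior proposed in suggestion 7 can be sketched as a small wrapper. The exception class and parameters here are placeholders; adapt them to whatever 429 error your actual API client raises:

```python
import time

class RateLimitError(Exception):
    """Stand-in for whatever exception your client raises on an HTTP 429."""

def with_retry(call, max_attempts=5, wait_seconds=60):
    """Call `call()`; on a rate-limit error, sleep and try again.

    Gives up and re-raises after `max_attempts` failed attempts.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts:
                raise
            time.sleep(wait_seconds)
```

    A fixed 60-second sleep matches the suggestion above; swapping it for exponential backoff (`wait_seconds * 2 ** attempt`) is the usual refinement when the provider's limit window is unknown.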
    GITHUB.COM
    GitHub - RooVetGit/Roo-Code: Roo Code (prev. Roo Cline) is a VS Code plugin that enhances coding with AI-powered automation, multi-model support, and experimental features
    Roo Code (prev. Roo Cline) is a VS Code plugin that enhances coding with AI-powered automation, multi-model support, and experimental features - RooVetGit/Roo-Code