We recently migrated from Poetry to uv because we wanted to benefit from its many features, such as:
- Easier dependency management with automatic caching built in
- Significantly faster CI/CD compared to Poetry, especially when using the caching functionality provided by the Astral team
- A Cargo-style lockfile that makes it easier to adopt new PEP features as they come out
The migration took around 1-2 days, and we're happy with the results: our CI/CD jobs are significantly faster on average.
Here are some timings taken from our CI/CD runs.
In general, we saw a ~3x speedup (roughly a 67% reduction in job time) once we implemented caching for the individual uv GitHub Actions.
Multimodal language models like gpt-4o excel at processing images alongside text, enabling us to extract rich, structured metadata from images.
This is particularly valuable in areas like fashion, where we can use these capabilities to understand user style preferences from images and even videos. In this post, we'll see how to use instructor to map images to a given product taxonomy so we can recommend similar products for users.
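Here's a minimal sketch of what that can look like with instructor and gpt-4o. The `ProductAttributes` model and the image URL are placeholder assumptions for illustration, not the taxonomy we build in this post:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

# Placeholder taxonomy - a real product taxonomy would be much richer
class ProductAttributes(BaseModel):
    category: str  # e.g. "dress", "sneaker"
    colors: list[str]
    style: str  # e.g. "casual", "formal"

client = instructor.from_openai(OpenAI())

resp = client.chat.completions.create(
    model="gpt-4o",
    response_model=ProductAttributes,
    messages=[
        {
            "role": "user",
            "content": [
                "Classify the product in this image against our taxonomy.",
                # instructor's multimodal helper handles fetching and encoding the image
                instructor.Image.from_url("https://example.com/product.jpg"),
            ],
        }
    ],
)
print(resp)
```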
Language models struggle to generate consistent graphs with a large number of nodes, often because the full graph is too large for the model to handle in a single pass. The result is inconsistent graphs with invalid or disconnected nodes, among other issues.
In this article, we'll look at how to get around this limitation with a two-phase approach to generating complex DAGs with gpt-4o, using a simple Choose Your Own Adventure story as our example.
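As a preview, here's a hedged sketch of the two-phase idea: first ask the model for a lightweight outline of the graph, then expand one node at a time while constraining its outgoing edges to node ids from the outline. The `Outline` and `StoryNode` models below are illustrative assumptions, not the exact schema used in the article:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

client = instructor.from_openai(OpenAI())

# Phase 1: a lightweight outline of the whole graph
class NodeStub(BaseModel):
    id: int
    title: str

class Outline(BaseModel):
    nodes: list[NodeStub]

# Phase 2: one fully fleshed-out node at a time
class StoryNode(BaseModel):
    id: int
    text: str
    choices: list[int]  # ids of the nodes this passage links to

outline = client.chat.completions.create(
    model="gpt-4o",
    response_model=Outline,
    messages=[
        {
            "role": "user",
            "content": "Outline a 10-node Choose Your Own Adventure story about a haunted lighthouse.",
        }
    ],
)

story = []
valid_ids = [n.id for n in outline.nodes]
for stub in outline.nodes:
    node = client.chat.completions.create(
        model="gpt-4o",
        response_model=StoryNode,
        messages=[
            {
                "role": "user",
                "content": (
                    f"Write the passage for node {stub.id} ('{stub.title}'). "
                    f"Only link to these node ids: {valid_ids}."
                ),
            }
        ],
    )
    story.append(node)
```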
Messy data exports are a common problem. Whether it's multiple headers in the table, implicit relationships that make analysis a pain, or merged cells, using instructor with structured outputs makes it easy to convert messy tables into tidy data, even if all you have is an image of the table, as we'll see below.
Let's look at the following table as an example. It makes analysis unnecessarily difficult because it hides data relationships behind empty cells and implicit repetition. If we were using it for data analysis, cleaning it up manually would be a nightmare.
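Here's a rough sketch of the approach with instructor and gpt-4o. The `Row` fields below are illustrative placeholders; in practice you'd shape them around the actual columns of your table:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

# One tidy row per observation - adjust the fields to your table's columns
class Row(BaseModel):
    region: str
    product: str
    year: int
    revenue: float

class TidyTable(BaseModel):
    rows: list[Row]

client = instructor.from_openai(OpenAI())

table = client.chat.completions.create(
    model="gpt-4o",
    response_model=TidyTable,
    messages=[
        {
            "role": "user",
            "content": [
                "Convert this table into tidy rows, filling in merged and implied cells.",
                instructor.Image.from_path("messy_table.png"),
            ],
        }
    ],
)

for row in table.rows:
    print(row)
```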
We're excited to announce that instructor now supports Writer's enterprise-grade LLMs, including their latest Palmyra X 004 model. This integration enables structured outputs and enterprise AI workflows with Writer's powerful language models.
First, make sure that you've signed up for an account on Writer and obtained an API key using this quickstart guide. Once you've done so, install instructor with Writer support by running `pip install "instructor[writer]"` in your terminal.
Make sure to set the `WRITER_API_KEY` environment variable with your Writer API key or pass it as an argument to the `Writer` constructor.
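With that in place, a minimal example looks something like this (assuming the `writerai` SDK and instructor's `from_writer` entry point; the `User` model is just for illustration):

```python
import instructor
from pydantic import BaseModel
from writerai import Writer

class User(BaseModel):
    name: str
    age: int

# Reads WRITER_API_KEY from the environment if no api_key is passed
client = instructor.from_writer(Writer())

user = client.chat.completions.create(
    model="palmyra-x-004",
    response_model=User,
    messages=[{"role": "user", "content": "Extract: John is 25 years old"}],
)
print(user)  # e.g. name='John' age=25
```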
In this post, we'll explore how to use Google's Gemini model with Instructor to generate accurate citations from PDFs. This approach ensures that answers are grounded in the actual content of the PDF, reducing the risk of hallucinations.
We'll be using the NVIDIA 10-K report for this example, which you can download at this link.
Processing PDFs programmatically has always been painful. The typical approaches all have significant drawbacks:
- PDF parsing libraries require complex rules and break easily
- OCR solutions are slow and error-prone
- Specialized PDF APIs are expensive and require additional integration
- LLM solutions often need complex document chunking and embedding pipelines
What if we could just hand a PDF to an LLM and get structured data back? With Gemini's multimodal capabilities and Instructor's structured output handling, we can do exactly that.
```python
import instructor
import google.generativeai as genai
from google.ai.generativelanguage_v1beta.types.file import File
from pydantic import BaseModel
import time

# Initialize the client
client = instructor.from_gemini(
    client=genai.GenerativeModel(
        model_name="models/gemini-1.5-flash-latest",
    )
)

# Define your output structure
class Summary(BaseModel):
    summary: str

# Upload the PDF
file = genai.upload_file("path/to/your.pdf")

# Wait for file to finish processing
while file.state != File.State.ACTIVE:
    time.sleep(1)
    file = genai.get_file(file.name)
    print(f"File is still uploading, state: {file.state}")

print(f"File is now active, state: {file.state}")
print(file)

resp = client.chat.completions.create(
    messages=[
        {"role": "user", "content": ["Summarize the following file", file]},
    ],
    response_model=Summary,
)

print(resp.summary)
```
Here's the raw result we got back:
summary="Gemini 1.5 Pro is a highly compute-efficient multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from millions of tokens of context, including multiple long documents and hours of video and audio. It achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra's state-of-the-art performance across a broad set of benchmarks. Gemini 1.5 Pro is built to handle extremely long contexts; it has the ability to recall and reason over fine-grained information from up to at least 10M tokens. This scale is unprecedented among contemporary large language models (LLMs), and enables the processing of long-form mixed-modality inputs including entire collections of documents, multiple hours of video, and almost five days long of audio. Gemini 1.5 Pro surpasses Gemini 1.0 Pro and performs at a similar level to 1.0 Ultra on a wide array of benchmarks while requiring significantly less compute to train. It can recall information amidst distractor context, and it can learn to translate a new language from a single set of linguistic documentation. With only instructional materials (a 500-page reference grammar, a dictionary, and ≈ 400 extra parallel sentences) all provided in context, Gemini 1.5 Pro is capable of learning to translate from English to Kalamang, a Papuan language with fewer than 200 speakers, and therefore almost no online presence."
The combination of Gemini and Instructor offers several key advantages over traditional PDF processing approaches:
1. **Simple Integration** - Unlike traditional approaches that require complex document processing pipelines, chunking strategies, and embedding databases, you can directly process PDFs with just a few lines of code. This dramatically reduces development time and maintenance overhead.
2. **Structured Output** - Instructor's Pydantic integration ensures you get exactly the data structure you need. The model's outputs are automatically validated and typed, making it easier to build reliable applications. If the extraction fails, Instructor automatically retries for you, with support for custom retry logic using tenacity (see the sketch after this list).
3. **Multimodal Support** - Gemini's multimodal capabilities mean this same approach works for various file types. You can process images, videos, and audio files all in the same API request. Check out our multimodal processing guide to see how we extract structured data from travel videos.
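For example, instructor's `max_retries` parameter accepts either an integer or a tenacity retrying object, so the summary call from earlier can be given custom retry behavior. A quick sketch, reusing `client`, `file`, and `Summary` from the snippet above:

```python
from tenacity import Retrying, stop_after_attempt, wait_fixed

resp = client.chat.completions.create(
    messages=[
        {"role": "user", "content": ["Summarize the following file", file]},
    ],
    response_model=Summary,
    # Retry validation failures up to 3 times, waiting 1 second between attempts
    max_retries=Retrying(stop=stop_after_attempt(3), wait=wait_fixed(1)),
)
```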
By combining Gemini's multimodal capabilities with Instructor's structured output handling, we can transform complex document processing into simple, Pythonic code.
No more wrestling with parsing rules, managing embeddings, or building complex pipelines – just define your data model and let the LLM do the heavy lifting.
If you liked this, give instructor a try and see how much easier structured outputs make working with LLMs. Get started with Instructor today!
Are you struggling with irrelevant search results in your Retrieval-Augmented Generation (RAG) pipeline?
Imagine having a powerful tool that can intelligently reassess and reorder your search results, significantly improving their relevance to user queries.
In this blog post, we'll show you how to create an LLM-based reranker using Instructor and Pydantic. This approach will:
- Enhance the accuracy of your search results
- Leverage the power of large language models (LLMs)
- Utilize structured outputs for precise information retrieval
By the end of this tutorial, you'll be able to implement an LLM reranker to label synthetic data for fine-tuning a traditional reranker, or to build out an evaluation pipeline for your RAG system. Let's dive in!
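As a preview, here's one minimal, illustrative way such a reranker can look with instructor. The `RankedDocument` schema, the 0-10 scoring scale, and the model choice are assumptions for this sketch rather than the exact setup from the tutorial:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field

# Illustrative schema: score each document's relevance to the query
class RankedDocument(BaseModel):
    document_id: int
    relevance: int = Field(
        ge=0, le=10, description="0 = completely irrelevant, 10 = directly answers the query"
    )

class RerankedResults(BaseModel):
    results: list[RankedDocument]

client = instructor.from_openai(OpenAI())

def rerank(query: str, documents: dict[int, str]) -> RerankedResults:
    # Present each candidate document with its id so the model can reference it
    docs = "\n".join(f"[{doc_id}] {text}" for doc_id, text in documents.items())
    return client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=RerankedResults,
        messages=[
            {
                "role": "system",
                "content": "Score each document's relevance to the query and "
                "return the documents ordered from most to least relevant.",
            },
            {"role": "user", "content": f"Query: {query}\n\nDocuments:\n{docs}"},
        ],
    )
```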