"We're open-source under the MIT Expat license"
Not quite. You should clarify a bit more. The README has this about their license.
"Certain features - such as Morphik Console - are not available in the open-source version. Any feature in the ee namespace is not available in the open-source version and carries a different license. Any feature outside that is open source under the MIT expat license."
Thanks, we should have been clearer. The part in ee is our UI, which can be used for testing or in dev environments. The main code, including the API, SDK, and the entire backend logic, is MIT Expat.
Couldn't upload files; they all failed with the error 'failed to fetch'.
Hey! What format of files are you uploading? Seems to work OK on my end...
I'd love to have something like this, but calling a cloud is a no-go for me. I have a half-baked tool that a friend of mine and I applied to the Mozilla Builders Grant with (didn't get in); it's janky and I don't have time to work on it right now, but it does the thing. I also find myself using OpenWebUI's context RAG stuff sometimes, but I'd really like to have a way to dump all of my private documents into a DB and have search/RAG work against them locally, preferably in a way that's agnostic of the LLM backend.
Does such a project exist?
You can run this fully locally using Ollama for inference, although you'll need larger models and a beefy machine for great results. On my end, Llama 3.2 8B does a good job on technical docs, but the bigger the better lol.
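If you want to sanity-check the local setup first, here's a minimal way to hit an Ollama server directly (this is Ollama's standard REST API, nothing Morphik-specific; the model tag is just an example):

```python
# Quick check that a local Ollama server is up and answering.
# Assumes `ollama serve` is running and the model has been pulled
# (e.g. `ollama pull llama3.1:8b`).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",  # any locally pulled model tag
        "prompt": "Summarize: RAG combines retrieval with generation.",
        "stream": False,  # return one JSON blob instead of a stream
    },
)
print(resp.json()["response"])
```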
Ahh, I didn't see that; I just saw them talking about a free tier or whatever and my eyes glazed over. I'll try it out with Mistral Small 3.1 at some point tonight; I've been having really great results with its multimodal understanding.
How would you use this within OpenWebUI locally?
Just curious, are you fine with running things in your own AWS / Azure / GCP account or do you really mean that the solution has to be fully on-premise?
Airgapped. It really makes threat modelling so, so, soooo much easier. It's temporal, so if I were being attacked by a state-level actor, exfiltration would be possible, but for this specific application I either have the data live and no internet, or internet and no data. I also have some lesser stuff that I allow on-prem with internet and just trust the firewall, but there's absolutely no way I'm doing any sensitive data storage or inference in the cloud.
Since people will be curious: one lesser thing I use this for is a diary/assistant, and it's nice to have the peace of mind that I can dump my innermost thoughts without any concern for oversharing.
What kind of hardware do you need for this setup?
A computer with a couple of gaming GPUs, a LAN cable you can unplug, and an encrypted external hard drive to offline your sensitive data.
The architecture sounds very, very promising. Normalizing entities and relations to put in a graph for RAG sounds great. (I'm still a bit unclear on ingesting or updating existing graphs.)
Curious about the suitability of this for PDFs that are conference presentation slides vs. academic papers. Is it sensitive or tunable to such distinctions?
Looking for tests/validation; are they all in the evaluation folder? A Pharma example would be great.
Thank you for documenting the telemetry. I appreciate the ee commercialization dance :)
For ingesting graphs, you can define a filter or certain document IDs. When updating, we check whether any other docs have been added matching that filter (or you can specify new doc IDs). We then run entity and relationship extraction again and do entity resolution against the existing graph to merge the two.
Creating graphs and entity resolution are both tunable with overrides: you can specify domain-specific prompts and overrides (will add a pharma example!) (https://docs.morphik.ai/python-sdk/create_graph#parameters). I tried to add code, but it was formatting badly, sorry for the redirect.
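For a rough idea of what that looks like, here's a sketch based on the linked docs; the exact prompt-override shape may differ, so defer to the docs page:

```python
# Sketch of graph creation with filters / doc ids and domain-specific
# prompt overrides, per https://docs.morphik.ai/python-sdk/create_graph.
# The override structure below is illustrative, not authoritative.
from morphik import Morphik

db = Morphik("morphik://localhost:8000")  # hypothetical local URI

# Build a graph from every document matching a metadata filter...
graph = db.create_graph(name="pharma_graph", filters={"domain": "pharma"})

# ...or from specific documents, steering entity extraction with a
# domain-specific prompt.
graph = db.create_graph(
    name="pharma_graph",
    documents=["doc_id_1", "doc_id_2"],  # placeholder ids
    prompt_overrides={
        "entity_extraction": {
            "prompt_template": "Extract drugs, targets, and trial names from: {content}"
        }
    },
)
```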
If it’s MIT open source, what does the paid part apply to?
The paid part applies to the ui-component, which provides a chat user interface. The core code, SDK, and API are all under the MIT license.
I'm currently building an internal tool using SurrealDB directly, but I'm curious to try Morphik since it implements features I haven't had time to figure out yet. (For example, I started with hardcoded schemas, and I like how you support both.)
Minor nitpick, but the README for your ui-component project under ee says:
"License This project is part of Morphik and is licensed under the MIT License."
However, your ee folder has an "enterprise" license, not the MIT license.
Thanks for pointing that out! Fixed it.
For the metadata extraction, we save these as Column(JSONB) for each document, which allows them to be changed on the fly.
Although I keep wondering if it would have been better to use something like MongoDB for this part, just because it's more natural.
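For anyone curious what that looks like, a minimal sketch (assuming SQLAlchemy, which the Column(JSONB) mention suggests; names here are illustrative, not Morphik's actual schema):

```python
# Flexible per-document metadata stored as a Postgres JSONB column.
from sqlalchemy import Column, String
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Document(Base):
    __tablename__ = "documents"
    id = Column(String, primary_key=True)
    # Schemaless: new keys can be added on the fly without a
    # migration, giving a MongoDB-like feel inside Postgres.
    # ("metadata" is reserved on declarative models, hence the name.)
    doc_metadata = Column(JSONB, default=dict)
```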
Please let me know if you have questions and how it works out for you.
Looks cool! What are the compute requirements or recommendations for self-hosting Morphik? What are the scaling limits? Can you provide a sense for latencies for ingestion and retrieval as the index size grows?
Depending on the use case, it happily runs on my MacBook Air M2 (16 GB RAM) with MPS for small PDFs, and searching over 100-150 documents with ColPali takes 2-ish minutes. Very rough numbers. Ingestion takes around 15-20 seconds a page, which is on the slower end. On an A100, ingestion with ColPali takes 4-5 seconds per page (we haven't performance-optimized or tuned batch sizes yet, though). Without ColPali it is much faster. Ingestion doesn't change much as size grows.
I'd be happy to report back after some testing, we are looking to optimize more of this soon, as speed is somewhat of a missing piece at the moment.
Should I use this if I don't plan on working with pdfs? What's the best RAG currently?
Depends on your document types.
If you're using txts, then plain RAG built on top of any vector database can suffice, depending on your queries (if they directly reference the text, or can be made to, then similarity search is good enough). If they're cross-document, setting plain RAG to retrieve a high number of chunks might also do a good job.
If you have tables, images, etc., then using a better extraction mechanism (maybe Unstructured, or other document processors) before creating the embeddings can also work well.
I'd say if docs are simple, then just building your own pipeline on top of a vector DB is good! (Rough sketch below.)
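Something like this is all it takes (using chromadb with its default embedder purely as an example; any vector DB has the same add/query shape):

```python
# Bare-bones "plain RAG on a vector DB" pipeline.
import chromadb

client = chromadb.Client()
collection = client.create_collection("docs")

# Ingest: chunk your txt files however you like, then add the chunks.
collection.add(
    ids=["chunk-1", "chunk-2"],
    documents=[
        "First chunk of text from a document...",
        "Second chunk of text from another document...",
    ],
)

# Retrieve: plain similarity search; raise n_results for
# cross-document questions.
hits = collection.query(query_texts=["what does the doc say about X?"], n_results=10)
for doc in hits["documents"][0]:
    print(doc)
```

The retrieved chunks then get stuffed into whatever LLM prompt you're using.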
I uploaded a file and it's been processing for over an hour now. No failure or anything. Maybe you should look into that.
Yeah, we had an overload on the ingestion queue. If you try again it will be much faster, as we just moved to a beefier machine. (The previous ingestion will still complete since it's in the queue, but new ones will be faster.)
Wait, your title says this "runs locally"?
Is this running a custom LLM under the hood, or?
ColQwen is basically a strict upgrade; would give it a go!
We do use ColQwen! Currently 2, but upgrading to 2.5 soon :)
How could I extract rectangles from PDF and then do something like this?
Do you mean ingesting the extracted rectangles/bounding boxes? We're actually working on bounding boxes; this is a good insight, and we can add it to the product. However, the way we ingest is literally converting each page to an image and then embedding that, so the text, layout, and diagrams are all encoded in (rough sketch below). I'd like to know what the exact use case is so I can help you better.
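To make the "page as image" idea concrete, here's a sketch of the rendering step (using PyMuPDF; this mirrors the approach described, not Morphik's actual internals):

```python
# Render each PDF page to an image so text, layout, and diagrams are
# all preserved in one visual representation, ready for a
# late-interaction vision model (ColPali/ColQwen) to embed.
import fitz  # PyMuPDF: pip install pymupdf
from PIL import Image

def pdf_pages_to_images(path: str, dpi: int = 150) -> list[Image.Image]:
    doc = fitz.open(path)
    images = []
    for page in doc:
        pix = page.get_pixmap(dpi=dpi)  # rasterize the full page
        images.append(Image.frombytes("RGB", [pix.width, pix.height], pix.samples))
    return images

# Each image would then be embedded directly (no OCR step), so
# bounding boxes, tables, and figures survive ingestion.
```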
Looks really nice! How does it handle tables?
We have two ingestion pathways: 1. regular OCR + text embeddings; 2. ColPali. We've observed that ColPali does a much better job with tables since it can encode positional information and layout as well.
Whenever I ask people who want to use such features at scale which figure could be out of place or have a transposed digit, it generally makes the project evaporate.