Show HN: Otto-m8 – A low-code AI/ML API deployment platform https://ift.tt/sF6RxJV
Hi all, I've been working on a low-to-no-code platform that lets you spin up deep learning workloads (LLMs, Huggingface models, etc.), interconnect them, and deploy them as APIs.

The idea came up in early September, when I was experimenting at work with combining a Huggingface-based BERT model with an LLM, and I realized it would be cool if I could do that instantly (especially since it was a prototype). At the time, I was considering a platform that could help you train deep learning models without any code; my observation was that much of the code required to train, or even run inference on, HF models has matured significantly. But before solving that problem, I wanted to solve inference.

Initially inspired by n8n and AWS CloudFormation, I built otto-m8 (read: "automate"). Given a JSON payload that lists all the resources and how each model is interconnected, it launches the workflow as a one-off API the user can query (rough sketch below). And thanks to Reactflow, the UI was something I just couldn't not implement. As I built it out, I didn't want to miss out on the LLM and agent bit either.

With otto-m8 today, you can launch complex workflows by interconnecting HF models and LLMs (currently it supports OpenAI and Ollama only). But I'd like it to become more than that. At its core, every workflow is an input-process-output model: inputs get processed and there's an output. With things set up that way, almost anything can be integrated and made interconnectable.

Project link: https://ift.tt/Mpaqij1

Let me know what you guys think. I really would love feedback!
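For anyone curious what the JSON-payload idea looks like in practice, here is a rough Python sketch. The field names ("blocks", "connections"), block types, and endpoints are illustrative guesses, not the actual otto-m8 schema; check the repo for the real format.

    import requests

    # Hypothetical workflow definition: a Huggingface classifier feeding an LLM.
    # The schema shown here is made up for illustration only.
    workflow = {
        "name": "classify_then_summarize",
        "blocks": [
            {"id": "clf", "type": "huggingface",
             "model": "distilbert-base-uncased-finetuned-sst-2-english"},
            {"id": "llm", "type": "openai", "model": "gpt-4o-mini"},
        ],
        "connections": [{"from": "clf", "to": "llm"}],
    }

    # Deploy the workflow (assumed endpoint), then query the resulting one-off API.
    deploy = requests.post("http://localhost:8000/workflows", json=workflow)
    run_url = deploy.json()["url"]  # assumed response field pointing at the new API
    result = requests.post(run_url, json={"input": "The battery life on this laptop is fantastic."})
    print(result.json())

The point is just to show the input-process-output shape: one payload describes the blocks and their wiring, and what comes back is a plain HTTP endpoint you can hit from anything.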