[img]
cofounder : early alpha release
- project - cofounder.openinterface.ai
- 👋 @n_raidenai
cofounder
- full stack generative web apps ; backend + db + stateful web apps
- gen ui rooted in app architecture, with ai-guided mockup designer & modular design systems
[demo]
Important
Early alpha release ; shipped 5-6 weeks earlier than expected
Key target features of the project are not yet merged, notably :
- project iteration modules for all dimensions of generated projects
- admin interface for event streams and (deeper) project iterations
- integrate the full genUI plugin :
- generative design systems
- deploy finetuned models & serve from api.cofounder
- local, browser-based dev env for the entire project scope
- add { react-native , flutter , other web frameworks }
- validations & swarm code review and autofix
- code optimization
- [...]
Be patient :)
Usage
Install & Init
- Open your terminal and run
npx @openinterface/cofounder -p "YourAppProjectName" -d "describe your app here" -a "(optional) design instructions"
Follow the instructions. The installer will :
- ask you for your API keys
- set up directories & start the installs
- start the local cofounder/api builder and server
- start generating your app 🎉
note :
you will be asked for a cofounder.openinterface.ai key
it is recommended to use one as it enables the designer/layoutv1 and swarm/external-apis features
and can be used without limits during the current early alpha period
the full index will be available for local download on v1 release
Run
Your backend & vite+react web app will incrementally generate inside ./apps/{YourApp}
Open your terminal in ./apps/{YourApp} and run
npm i && npm run dev
After installing their dependencies, it will start both the backend and the vite+react frontend concurrently
Go to http://localhost:5173/ to open the web app 🎉
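The dev script of the generated app wires the backend and the vite frontend together. As a purely hypothetical sketch of what such a package.json script section can look like (the concurrently tool and the script/file names here are assumptions — the generated app may wire this differently) :

```json
{
  "scripts": {
    "dev": "concurrently \"npm:dev:server\" \"npm:dev:client\"",
    "dev:server": "node server.js",
    "dev:client": "vite"
  }
}
```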
Notes
Local API
If you resume later and would like to iterate on your generated apps,
the local ./cofounder/api server needs to be running to receive queries
You can (re)start the local cofounder API by running the following command from ./cofounder/api
npm run start
You can also generate new apps from the same env by running, from ./cofounder/api, one of these commands
npm run start -- -p "ProjectName" -f "some app description" -a "minimalist and spacious , light theme"
npm run start -- -p "ProjectName" -f "./example_description.txt" -a "minimalist and spacious , light theme"
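The -p / -f / -a flags above can be handled with a few lines of Node. This is a hypothetical sketch, not cofounder's actual CLI code — only the flag names come from the commands above, everything else is assumed :

```javascript
// Minimal sketch of parsing the -p / -f / -a flags shown above.
// Illustration only; cofounder's real CLI may differ.
function parseArgs(argv) {
  const flagNames = { "-p": "project", "-f": "description", "-a": "aesthetics" };
  const parsed = {};
  for (let i = 0; i < argv.length; i++) {
    const key = flagNames[argv[i]];
    if (key) parsed[key] = argv[++i]; // the value follows its flag
  }
  return parsed;
}

// Mirrors: npm run start -- -p "ProjectName" -f "some app description" -a "..."
const opts = parseArgs(["-p", "ProjectName", "-f", "some app description", "-a", "light theme"]);
console.log(opts);
```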
Concurrency
[the architecture will be further detailed and documented later]
Every "node" in the cofounder architecture has a defined configuration under ./cofounder/api/system/structure/nodes/{category}/{name}.yaml that handles things like concurrency, retries, and limits per time interval.
For example, if you want multiple LLM generations to run in parallel (when possible - sequences and parallels are defined in DAGs under ./cofounder/api/system/structure/sequences/{definition}.yaml ), open
```yaml
# ./cofounder/api/system/structure/nodes/op/llm.yaml
nodes:
  op:LLM::GEN:
    desc: "..."
    in: [model, messages, preparser, parser, query, stream]
    out: [generated, usage]
    queue:
      concurrency: 1 # <------------------------------- here
  op:LLM::VECTORIZE:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    mapreduce: true
  op:LLM::VECTORIZE:CHUNK:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    queue:
      concurrency: 50
```
and change the op:LLM::GEN concurrency parameter to a higher value.
The default LLM concurrency is set to 1 so you can follow what is happening in your console streams step by step - but you can increase it to 5-8.
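To make what the queue.concurrency setting controls concrete, here is a minimal sketch of a concurrency-limited task queue — not cofounder's actual queue implementation, just the general behavior : at most N tasks run at once, the rest wait :

```javascript
// Minimal sketch of a concurrency-limited task queue.
// Illustration only; not cofounder's actual implementation.
class TaskQueue {
  constructor(concurrency) {
    this.concurrency = concurrency; // max tasks running at once
    this.running = 0;
    this.waiting = [];
  }
  push(task) {
    return new Promise((resolve, reject) => {
      this.waiting.push({ task, resolve, reject });
      this._next();
    });
  }
  _next() {
    while (this.running < this.concurrency && this.waiting.length) {
      const { task, resolve, reject } = this.waiting.shift();
      this.running++;
      Promise.resolve()
        .then(task)
        .then(resolve, reject)
        .finally(() => {
          this.running--;
          this._next(); // a slot freed up; start the next waiting task
        });
    }
  }
}

// With concurrency: 1, "generations" run strictly one after another;
// raising it lets several run in parallel.
const queue = new TaskQueue(2);
let peak = 0, active = 0;
const fakeGen = (id) => async () => {
  active++; peak = Math.max(peak, active);
  await new Promise((r) => setTimeout(r, 10)); // simulate an LLM call
  active--;
  return id;
};
Promise.all([1, 2, 3, 4].map((id) => queue.push(fakeGen(id)))).then((out) => {
  console.log("done:", out, "peak parallelism:", peak);
});
```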
Docs, Design Systems, ...
[WIP]
Architecture
[img]
Some Credits
- Cover art edited from an image found on patriciaklein.de
- Demo design systems built using Figma renders / UI kits from :
  - blocks.pm by Hexa Plugin (see cofounder/api/system/presets)
  - google material
  - figma core
  - shadcn