![cofounder-og](https://github.com/user-attachments/assets/a9cda40f-0bf8-423b-815d-d8d6891f7ce3)
# Cofounder | Early alpha release

* project - [cofounder.openinterface.ai](https://cofounder.openinterface.ai)
* 👋 [@n_raidenai](https://x.com/n_raidenai)

**cofounder**

- full-stack generative web apps; backend + db + stateful web apps
- generative UI rooted in the app architecture, with an AI-guided mockup designer & modular design systems

https://github.com/user-attachments/assets/b78d25aa-7cd3-49ee-a6f8-a952a77c8096

---
## Important

**Early alpha release; earlier than expected by 5-6 weeks**

Key target features of the project are still not merged in, notably:

- project iteration modules for all dimensions of generated projects
- an admin interface for event streams and (deeper) project iterations
- integration of the full genUI plugin:
  * generative design systems
  * deploying finetuned models & serving them from api.cofounder
- a local, browser-based dev env covering the entire project scope
- support for { react-native, flutter, other web frameworks }
- validations & swarm code review and autofix
- code optimization
- [...]

Be patient :)
---

# Usage

## Install & Init

* Open your terminal and run:

```sh
npx @openinterface/cofounder -p "YourAppProjectName" -d "describe your app here" -a "(optional) design instructions"
```
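For example, a hypothetical invocation (the project name, description, and design instructions below are placeholder values to adapt):

```sh
# hypothetical placeholder values - replace with your own project details
npx @openinterface/cofounder -p "TodoListApp" -d "a todo list app with user accounts and due dates" -a "minimalist and spacious , light theme"
```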
Follow the instructions. The installer:

- will ask you for your keys
- will set up directories & start the installs
- will start the local `cofounder/api` builder and server
- will start generating your app 🎉

```
note:
you will be asked for a cofounder.openinterface.ai key
it is recommended to use one, as it enables the designer/layoutv1 and swarm/external-apis features
and can be used without limits during the current early alpha period

the full index will be available for local download on v1 release
```
## Run

Your backend & vite+react web app will incrementally generate inside `./apps/{YourApp}`.
Open your terminal in `./apps/{YourApp}` and run:

```sh
npm i && npm run dev
```

It will start both the backend and the vite+react app concurrently, after installing their dependencies.
Go to `http://localhost:5173/` to open the web app 🎉
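Put together, assuming the hypothetical `TodoListApp` project generated earlier:

```sh
cd ./apps/TodoListApp   # replace with your own project name
npm i && npm run dev
# then open http://localhost:5173/ in your browser
```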
## Notes

### Local API

If you resume later and would like to iterate on your generated apps, the local `./cofounder/api` server needs to be running to receive queries.

You can (re)start the local cofounder API by running the following command from `./cofounder/api`:

```sh
npm run start
```
You can also generate new apps from the same environment by running, from `./cofounder/api`, one of these commands:

```sh
npm run start -- -p "ProjectName" -f "some app description" -a "minimalist and spacious , light theme"
npm run start -- -p "ProjectName" -f "./example_description.txt" -a "minimalist and spacious , light theme"
```
### Concurrency

**[the architecture will be further detailed and documented later]**

Every "node" in the `cofounder` architecture has a defined configuration under `./cofounder/api/system/structure/nodes/{category}/{name}.yaml` to handle things like concurrency, retries, and limits per time interval.

For example, if you want multiple LLM generations to run in parallel (when possible - sequences and parallels are defined in DAGs under `./cofounder/api/system/structure/sequences/{definition}.yaml`), go to:
```yaml
# ./cofounder/api/system/structure/nodes/op/llm.yaml
nodes:
  op:LLM::GEN:
    desc: "..."
    in: [model, messages, preparser, parser, query, stream]
    out: [generated, usage]
    queue:
      concurrency: 1 # <------------------------------- here
  op:LLM::VECTORIZE:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    mapreduce: true
  op:LLM::VECTORIZE:CHUNK:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    queue:
      concurrency: 50
```
and change the `op:LLM::GEN` parameter `concurrency` to a higher value.

The default LLM concurrency is set to `1` so you can see what's happening in your console streams step by step, but you can increase it to `5`-`8`.
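For example, a minimal sketch of the edited node entry, raising it so up to 5 LLM generations run in parallel (only the changed lines are shown; keep the rest of the file as above):

```yaml
# ./cofounder/api/system/structure/nodes/op/llm.yaml (sketch of the edit)
nodes:
  op:LLM::GEN:
    # ... desc / in / out unchanged ...
    queue:
      concurrency: 5 # was 1 ; allows up to 5 parallel generations
```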
---

# Docs, Design Systems, ...

**[WIP]**

---

# Architecture

[more details later]

archi/v1 is as follows:

![architecture](https://github.com/user-attachments/assets/b2d8b70e-7a6d-45c9-a706-0cf955d13451)
---

# Some Credits

- Cover art edited from an image found on [patriciaklein.de](https://patriciaklein.de)
- Demo design systems built using Figma renders / UI kits from:
  * blocks.pm by Hexa Plugin (see `cofounder/api/system/presets`)
  * Google Material
  * Figma Core
  * shadcn