
Vibe Modeling: Turning Prompts into Parametric Prints
Author(s): Nicolas CHOURROUT
Originally published on Towards AI.
TL;DR I built a “vibe‑modeling” workflow — ask an AI for a part, tweak it if needed, and download a ready‑to‑print 3D file. Below I show the pipeline, walk through real print examples, and talk about why this is thrilling for beginners yet still a second‑string tool for seasoned CAD users.

What is “Vibe Modeling”?
Think of vibe modeling as the CAD twin of vibe coding: you suggest the idea (“sleek bowl”, “iPhone stand”) and let the LLM handle the math. From there you can keep chatting with the bot to iterate or nudge the generated sliders to fine‑tune dimensions — staying in the creative flow instead of spending hours in Fusion 360 — until the design feels right.
The Chat To STL App
I built a basic Streamlit application to demonstrate the concept.
Type your description in the chat and o4‑mini does what LLMs do best — write code. In this case the code is plain OpenSCAD (a free, script‑based CAD language), so every line the model emits translates directly into geometry. Peek at that script in a collapsible panel or watch it materialise instantly in an interactive 3D viewport beside the chat. Refine the design by sending a follow‑up prompt or by nudging the auto‑generated sliders (wall thickness, diameters, hole count…). When it looks right, grab a ready‑to‑print STL or 3MF with one click.
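To make the slider part concrete, here is a hypothetical script of the sort the model emits for a simple request. Top-level variables like these are natural slider candidates; the bracketed range comments use OpenSCAD's Customizer syntax (my assumption for how ranges could be conveyed, not necessarily what the app parses):

```
// Hypothetical output for "a round plate with evenly spaced holes".
// Every top-level variable can become a slider in the app's UI.
plate_diameter = 60;  // [30:120]
thickness = 4;        // [2:0.5:10]
hole_count = 6;       // [2:12]
hole_diameter = 5;    // [2:10]

difference() {
    cylinder(h = thickness, d = plate_diameter, $fn = 96);
    // Evenly spaced holes on a circle, cut clean through the plate
    for (i = [0 : hole_count - 1])
        rotate([0, 0, i * 360 / hole_count])
            translate([plate_diameter / 3, 0, -1])
                cylinder(h = thickness + 2, d = hole_diameter, $fn = 48);
}
```

Because every dimension lives in a named variable, a slider tweak simply re-runs the script; no mesh editing required.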
Full repository (bring your own OpenAI key): github.com/nchourrout/chat-to-stl
Edit: you can now also try it online on Hugging Face 🤗
Example 1 — Catch‑all bowl 🥣
Prompt: “A catch‑all bowl with ‘have a good day’ engraved on the bottom.”

The model nailed it on the very first run — no slider tweaks, no re‑prompting. I sliced the STL, sent it to my Bambu Lab A1 3D printer, and 45 minutes later pulled the dish off the plate.
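A minimal sketch of how such a bowl can be scripted in OpenSCAD (assumed dimensions, not the model's actual output): a tapered shell with the message mirrored and subtracted from the underside.

```
// Illustrative only; the generated script may have differed.
bowl_d = 110;  // rim diameter
height = 35;
wall = 2.5;
base = 3;      // floor thickness

difference() {
    // Tapered outer shell, narrower at the base
    cylinder(h = height, d1 = bowl_d * 0.6, d2 = bowl_d, $fn = 120);
    // Inner cavity, leaving the wall and a solid floor
    translate([0, 0, base])
        cylinder(h = height, d1 = bowl_d * 0.6 - 2 * wall,
                 d2 = bowl_d - 2 * wall, $fn = 120);
    // Engraving: mirror the text so it reads correctly from below,
    // then sink it 1 mm into the underside
    translate([0, 0, -0.01])
        mirror([1, 0, 0])
            linear_extrude(1.01)
                text("have a good day", size = 7,
                     halign = "center", valign = "center");
}
```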
Example 2 — Swiss‑cheese door stop 🧀
Initial prompt: “door stop, wedge shaped like a swiss cheese with holes”

The first render nailed the wedge shape but the “holes” were all clumped together. I re‑prompted the bot to subtract random‑sized spheres scattered through the body; one pass later the stop sported true Emmental‑style holes and still wedged the door solid.
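That fix translates naturally into OpenSCAD's rands() function. Here is my reconstruction of the approach (not the generated code itself):

```
// Wedge door stop with randomly scattered spherical holes.
length = 100; width = 40; height = 45;
hole_count = 14;
seed = 42;  // fixed seed keeps the holes reproducible between renders

// One random list per coordinate, plus one for the radii
xs = rands(5, length - 5, hole_count, seed);
ys = rands(5, width - 5, hole_count, seed + 1);
zs = rands(8, height - 8, hole_count, seed + 2);
rs = rands(3, 7, hole_count, seed + 3);

difference() {
    // The wedge: a right-triangle profile extruded to the full width
    rotate([90, 0, 0])
        linear_extrude(width)
            polygon([[0, 0], [length, 0], [0, height]]);
    // Subtract the "cheese holes"
    for (i = [0 : hole_count - 1])
        translate([xs[i], -ys[i], zs[i]])
            sphere(r = rs[i], $fn = 48);
}
```

Fixing the seed keeps the holes stable across re-renders; changing it gives a fresh scatter.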
Example 3 — Pushing the limits with a backpack hook for an IKEA desk 🎒
Initial prompt: “S-shaped backpack hook I can mount on a table top”


Getting this right took roughly ten iterations 😅. I had to spell out the S‑shaped profile, tell the model to extrude that outline, and then ask for precise fillets where I wanted the curves. Once the render finally looked good, the AI surfaced sliders for dimensions and a few nudges later the clamp fit my IKEA Linnmon tabletop like a glove. Printed in PLA (though PETG would be even better), it now holds a fully‑loaded backpack. The final hook feels plenty sturdy, yet the iterative back‑and‑forth highlighted that knocking out a quick sketch in Fusion 360 would have been faster.
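For the curious, the winning recipe looked roughly like this in OpenSCAD terms: build the S profile from two half-rings in 2D, round the corners, then extrude. This is a simplified reconstruction with made-up dimensions; the real part also included the tabletop clamp, omitted here.

```
bar = 8;       // thickness of the S bar
r = 14;        // mid-line radius of each curve
depth = 20;    // extrusion depth (hook width)
fillet = 1.5;

// Half of a ring, opening to the left or to the right
module half_ring(r_mid, t, open_right = true) {
    difference() {
        circle(r = r_mid + t / 2, $fn = 128);
        circle(r = r_mid - t / 2, $fn = 128);
        // Cut away one side so the ring opens there
        translate([open_right ? 0 : -2 * (r_mid + t), -2 * (r_mid + t)])
            square([2 * (r_mid + t), 4 * (r_mid + t)]);
    }
}

linear_extrude(depth)
    // Outward-then-inward offset as a crude stand-in for fillets:
    // it rounds the concave corners where the two arcs meet
    offset(r = -fillet) offset(r = fillet)
        union() {
            translate([0,  r]) half_ring(r, bar, open_right = true);
            translate([0, -r]) half_ring(r, bar, open_right = false);
        }
```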
Bonus Example — A star‑shaped box ⭐️
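This one reduces to a short script: extrude a star polygon, then subtract an inset copy to hollow it out, leaving a floor and an open top. A sketch with assumed dimensions (the app's actual output may differ):

```
points = 5;
outer_r = 40;  // tip radius
inner_r = 18;  // valley radius
height = 25;
wall = 2;

// Alternating tip/valley vertices around a circle (angles in degrees)
function star(ro, ri, n) = [
    for (i = [0 : 2 * n - 1])
        let (rad = i % 2 == 0 ? ro : ri, a = i * 180 / n)
        [rad * cos(a), rad * sin(a)]
];

difference() {
    linear_extrude(height)
        polygon(star(outer_r, inner_r, points));
    // Hollow: inset the same outline by the wall thickness
    translate([0, 0, wall])
        linear_extrude(height)
            offset(delta = -wall)
                polygon(star(outer_r, inner_r, points));
}
```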
Aren’t there already AI models for 3D?
Projects like OpenAI’s Shap‑E, Google’s DreamFusion, and NVIDIA’s GET3D can turn text into 3D shapes, but those outputs are built for screens, not manufacturing. They produce dense, unstructured geometry (meshes, point clouds, or neural fields) with no named dimensions to edit. Vibe modeling takes a different route: the AI writes parametric OpenSCAD solids that are easily editable and can be sliced cleanly for real‑world, functional prints.
Where it falls short
- Simple shapes only. Right now the AI shines with basic objects. Even just asking it for a mug can lead to a completely misplaced handle.
- Needs CAD‑style language. Prompts work best if you speak in sketch‑and‑extrude terms (“draw a 50 × 20 mm rectangle, extrude 5 mm, fillet 3 mm”); see the sketch after this list.
- No self‑critique. Currently, the model can’t see what the rendered 3D objects look like. However, I’m working on a version that would feed the render preview image back so the AI can spot errors on its own.
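That sketch‑and‑extrude phrasing maps almost one‑to‑one onto OpenSCAD primitives, which is why it works so well. Here is one plausible translation of the example prompt above:

```
linear_extrude(5)                                 // "extrude 5 mm"
    offset(r = 3)                                 // "fillet 3 mm" (rounds the corners)
        square([50 - 6, 20 - 6], center = true);  // grows back to a 50 x 20 mm footprint
```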
Who should use this?
- Novices & weekend makers — skip the CAD learning cliff; vibe‑model a quick fix in minutes.
- Educators — teach parametric thinking without a complex UI.
- Power users — not yet. For intricate sketches or assemblies, Fusion 360/SolidWorks is still the way to go… but for how long?
If you use the app, tag me — can’t wait to see what you design with it!
Need custom AI tooling? We build tailored solutions at flowful.ai.