Week 2

We’ve decided to build our project using the Gemini local LLM platform. Making that choice was a turning point for the group—it allowed us to stop debating and start defining our actual technical requirements.

The Technical Reality Check

One of the first things we realized is that running local LLMs isn’t “resource-light.” Through some initial testing, we’ve established our hardware baseline: machines with at least 16 GB of RAM and more than 4 CPU cores. Having these constraints settled early is a huge relief because it gives us a clear sandbox to play in and prevents us from designing something our own hardware can’t actually execute.
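To make that baseline concrete, here’s a minimal sketch of the kind of environment check each of us could run before setting up. The thresholds (16 GB, more than 4 cores) come from our testing; the function names and probe logic are purely illustrative, and the memory probe assumes a Linux or macOS host:

```python
import os

# Baseline from our initial testing; the helper names below are
# illustrative, not part of any real tooling.
MIN_RAM_GB = 16
MIN_CORES = 5  # "more than 4 cores"

def meets_baseline(ram_gb: float, cores: int) -> bool:
    """Return True if a machine satisfies the team's hardware floor."""
    return ram_gb >= MIN_RAM_GB and cores >= MIN_CORES

def probe_this_machine() -> tuple[float, int]:
    """Best-effort probe of the current host (Linux/macOS only)."""
    cores = os.cpu_count() or 0
    try:
        ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    except (ValueError, OSError):
        ram_bytes = 0  # sysconf keys unavailable on this platform
    return ram_bytes / 2**30, cores

if __name__ == "__main__":
    ram_gb, cores = probe_this_machine()
    status = "OK" if meets_baseline(ram_gb, cores) else "below baseline"
    print(f"{ram_gb:.1f} GB RAM, {cores} cores -> {status}")
```

A one-line script like this beats asking everyone to eyeball their system settings, and it documents the agreed-upon floor in one place.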

What I’ve Discovered About Collaboration

The most rewarding (and surprising) part of this process hasn’t been the code, but the group dynamics. I’ve realized that my role in this team is centered around alignment.

I discovered something about myself during our brainstorming sessions: I have a deep-seated need for consensus. I’ve learned that you can’t just push a “good idea” and expect it to fly. People’s excitement, and their willingness to put in the late-night hours, depends entirely on how bought-in they feel to the vision.

Wins and Roadblocks

The Good: Once we hit that deep consensus, the momentum changed. The energy in the “room” shifted from “What should we do?” to “How do we build this?”

The Challenges: We are still navigating the learning curve of the Gemini local platform and ensuring everyone on the team has the environment access they need, especially given those hardware requirements.

Written on or before February 8, 2026