We brought a unique coding platform and cohort-based model to the American market. And it didn’t go as planned. Here is why, and how we fixed it.
Launching an EdTech product in a competitive, unfamiliar market is challenging - and exciting. You never really know whether it will work. Even after studying other startups’ failures and successes, analyzing competitors, and tracking market trends, something unexpected always comes up. In this article, let’s walk through a real EdTech product case: how we changed one of our product’s core features for a new market - and how it turned out.
🛬 A Coding Platform and Structured Cohort-Based Learning Model
At the time we launched, our team was small, and we’d already seen how well a structured, interactive learning system worked in online education. It seemed logical to try the same setup: an interactive coding platform, a cohort-based program with strict deadlines, active support from mentors and tutors, and a strong, structured curriculum.
The team did the homework - we analyzed the market, looked at competitors, prepared a plan - and yes, some of our main competitors were already offering similar models, with mentoring or cohort elements. Not everyone had a custom learning platform, though, so we thought we had a technical advantage there.
So, we launched with a classic EdTech format: students start in a cohort on a fixed date, progress together, and if they fall too far behind, they take an “academic break” and rejoin a later cohort. It made sense to us. It had worked well in other markets and with other users. In the U.S.? Not so much.
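To make the original mechanic concrete, here is a minimal sketch of the rule we launched with, written as a hypothetical example - the function name and the two-module threshold are assumptions for illustration, not our actual platform code.

```python
# Hypothetical illustration of the original cohort rule, not production code.
FALL_BEHIND_LIMIT = 2  # assumed: how many modules behind the schedule triggers a break

def place_student(modules_completed: int, modules_scheduled: int) -> str:
    """Either the student stays with the cohort or takes an 'academic break'."""
    lag = modules_scheduled - modules_completed
    if lag > FALL_BEHIND_LIMIT:
        # Pause now and rejoin a later cohort on its fixed start date.
        return "academic break: rejoin a later cohort"
    return "stay in the current cohort"
```

The rigidity is the point: the only options were “keep the pace” or “leave the cohort,” with nothing in between.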
💥 What Went Wrong
American users didn’t really click with the model. Early feedback was positive, and there were some really strong cohorts where students bonded and made it all the way through together - retention in those groups was over 75% even a year later. But overall, financial metrics and learning analytics clearly showed problems with motivation, retention, and keeping up with deadlines.
Students and tutors in different time zones, the need for more flexibility, and the expectation of balancing part-time learning with real life - none of it fit our initial structure well. The coding projects were technically challenging and time-intensive. Deadlines were hard. Students got stressed. And stress led to churn.
The academic break system didn’t help as planned - in fact, it hurt retention badly. To students it felt like failure, even if that wasn’t the intention. So, we had to rethink the whole thing ASAP, before it was too late. One challenge was introducing something new from scratch in the middle of an active learning process on the platform without ruining users’ progress or breaking our promises. The change was also tightly bound to the legal side of educational services - something that should be considered from the very start.
🔄 Rebuilding the Learning Model From Scratch
We ran a full discovery process. We spoke to our internal teams, independent experts and consultants, students, competitors, and U.S.-based learning designers. We dove into best practices across global EdTech. And based on what we learned, we fully redesigned our learning model and the platform’s technical capabilities.
We shifted from strict cohort-based learning to a flexible model, introducing recommended deadlines as milestones, which reduced the emotional pressure of failure. We redesigned the platform logic, the tone of our communications, the way we positioned the product, and the structure of student support.
Instead of being tied to a single tutor, students had access to multiple mentors at once - this expanded their network and let them learn from a range of industry professionals through workshops run by different experts. We removed “academic breaks” and replaced them with a limited number of “extra weeks” students could request if they needed more time.
We even started notifying them ahead of time: “This stage might be tricky - save your ‘ladder’ for later.” Like in a game, and it helped. The experience became much more personal, more flexible, and much better aligned with American learning culture. Retention thrived, we got plenty of positive feedback, and financial metrics improved along with learning analytics. But it wasn’t something to stop at - EdTech is constantly changing, and we’ve since made plenty of further changes to this structure, still giving users personalized flexibility, just better managed.
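As a concrete illustration of the new logic, here is a minimal sketch of how a flexible-deadline model with an “extra weeks” budget and proactive nudges could be wired up. Everything in it - the Milestone and StudentPlan names, the three-week cap, the nudge wording - is a hypothetical example, not the actual platform implementation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

MAX_EXTRA_WEEKS = 3  # assumed per-student budget of "extra weeks"

@dataclass
class Milestone:
    title: str
    recommended_deadline: date  # a soft target, not a hard cutoff
    is_tricky: bool = False     # marks stages where we nudge students to save a "ladder"

@dataclass
class StudentPlan:
    milestones: list[Milestone]
    extra_weeks_used: int = 0

    def request_extra_week(self, from_index: int) -> bool:
        """Grant one extra week and shift the remaining soft deadlines."""
        if self.extra_weeks_used >= MAX_EXTRA_WEEKS:
            return False  # budget spent; a human advisor steps in instead
        self.extra_weeks_used += 1
        for m in self.milestones[from_index:]:
            m.recommended_deadline += timedelta(weeks=1)
        return True

    def upcoming_nudges(self, today: date) -> list[str]:
        """Proactive reminders before technically demanding stages."""
        return [
            f"“{m.title}” might be tricky - consider saving an extra week for it."
            for m in self.milestones
            if m.is_tricky and today <= m.recommended_deadline
        ]
```

A missed recommended deadline here never kicks anyone out of the program; it only changes what the platform suggests next, which is exactly the shift that took the pressure off.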
🎯 How to Listen to Vague Feedback and Form Hypotheses
Talking to users is one of the most important parts of product development, and we did it constantly via feedback surveys, customer development interviews, UX research, and so on. The issue was that feedback was sometimes polite but vague. Especially in early interviews, we’d get generic answers that didn’t really help us validate specific hypotheses.
For example, when users drop out, they might say “financial reasons.” But... what does that actually mean?
Did we not offer the right payment plans? Were they unmotivated? Or maybe they actually didn’t like the course - but didn’t want to say that?
What always helped when we were in the dark was reviewing a student’s learning journey in detail before the UX interview. Sometimes we could spot early predictors of churn or dropping motivation. Sometimes it was the opposite: a highly motivated student, an unexpected refund request, and only vague answers. It was tough when the feedback lacked specificity.
Compared to students from other markets we’d worked with - who were often more direct - this sometimes felt like trying to read between the lines.
Even exit interviews weren’t always that helpful, so we had to learn how to listen better. This is where active listening proved to be the real deal. On calls, we started tossing in hypotheses like: “So, would a feature like this have helped you stay on track?” And suddenly, people would open up with way more detail. Another approach was to let students share their feedback with the team members closest to the problem - if it was financial, that meant the sales lead or the Product Manager responsible for payment options.
For technical or curriculum problems, we sometimes brought several team members into the UX interview - from the development and curriculum teams. Beyond the interviews, we completely reworked our feedback surveys and rephrased the questions - that made a difference too, as a preliminary step before the exit interview. We also made sure that everyone involved in interviews knew the product from the inside and had completed at least a small part of it as a student would, so they could speak the same language as our users.
🤖 Offering Free Content Wasn’t as Attractive as We Thought
Another thing that didn’t land? Letting users explore the platform and our product on their own.
We’d hoped students would find the course, go through a free trial, see how interactive it was, book a call with sales if interested, and enroll. But that path rarely worked - “try something with no commitment” too often meant exactly that: no commitment.
It actually worked much better when a sales representative - an admissions advisor or career advisor - talked to them first, walked them through career paths, compared programs, helped them reflect on their goals and motivation, discussed financial plans, and checked whether there was a match.
That human connection created clarity and trust, and self-service never quite matched it. What also worked well was letting users start any program early for a very affordable fee (around $100) that included the platform, expert guidance, career advice, and support from our side. There was no obligation to roll that fee into the full program payment - it was simply a way to explore the platform with all its features, not alone but with personalized support from the start. That way, both the company and the user were invested in the outcome, and we guided students along the whole journey without pressure.
Some Final Thoughts
You can’t just copy-paste a model from one country to another and expect it to fit perfectly. Even when the format is strong and the platform is solid, culture and user behavior shape everything - from how people learn, to how they give feedback, to what they expect from tech.
We made a lot of assumptions. We got plenty of things wrong. But we listened, iterated, and rebuilt key parts of the experience - and we were not afraid of doing so. And eventually, it worked. Students responded. Retention improved. The learning journey felt more natural, more empowering, and more engaging.