How to Use Model Playground for No-Code Model Building

by Hasty.ai, November 4th, 2021

Too Long; Didn't Read

We are now launching Model Playground, a model experimentation and building environment where you can train and benchmark models on your data without writing any code yourself. But you are still completely in control, just as if you were training a model locally.


Today is a big day for Hasty. After many months in closed beta, we are finally ready to launch Model Playground into open beta. We think it’s very cool, but why should you care?


Over the last few years, we have talked to many, many AI teams. One thing we heard repeatedly is that the vision AI software stack has grown out of control. Most teams use one tool for annotation, another for data curation, a third for experiment tracking, and a fourth for serving models to production. That is not even counting what organizations have built themselves to integrate all these tools. Today, companies are forced to maintain and connect their own data lakes, integrate data pipelines, or even build and maintain their own in-house tools.


All this software and engineering comes with many problems. Here is a summary of what we heard:


  • Experimenting with different building blocks takes too long. At the start of a project, you don’t always have all the answers. Should you use Architecture A or B? Which optimizer works best for your use case? Building these experiments yourself in code can take a lot of time and slow projects down. We often see 3-7 month iteration periods.
  • Data and models exist in different silos. When you train a model, you first have to find the latest version of your data asset and then fetch it, often manually. If you train a good model, there is a lot of engineering needed to get it to assist with your annotation work. The obvious symbiosis between model and data is broken. Everyone knows that data and models should live side by side and that progress on one should benefit the other, but that seldom happens.
  • Engineering a pipeline is a pain but necessary today. The options are either building a pipeline where you have to integrate multiple tools and services from day one or kicking the can down the road and getting by with manual work. This setup time often delays new projects and takes focus away from what’s important: building a good data asset and training highly performant models.
  • Building your own MLOps solution is too expensive. Even if you get a pipeline set up for training models, packaging and deploying them to the edge or hosting them in the cloud comes with significant pain points and a high price tag.
  • It’s tough to keep track of the current state of a project. Different versions of data exist in different environments. Models are trained on anything from laptops to the cloud. And experiment tracking is still done in spreadsheets. There’s no source of truth where you can understand the project as a whole without having to sit down and write emails or Slack your colleagues.
  • Everything comes with hidden costs. Every service and piece of software wants a slice of your budget, and building something yourself is even more expensive.


Our solution to all the problems outlined above is our new Model Playground. It combines with our existing Hasty product to deliver a true “one-stop shop” for your vision AI needs. Now that we have your attention, let’s get into what makes Model Playground so unique.


What is Model Playground?

In short, Model Playground is a model experimentation and building environment where you can train and benchmark models on your data without writing any code yourself.


“Aha!” you might say, “another no-code solution that has simplified model development down to a couple of clicks.” No, that’s not what we are talking about here.


We’ve seen many no-code solutions. We think they are fine for playing around, but almost all of them don’t give you full control over the parameters and come with a limited number of architectures.


We, on the other hand, don’t disempower ML engineers. We don’t know your use case. We haven’t seen your data. Given that, simplifying model development down to a few clicks would be foolhardy when our primary customers are ML engineers and solutions architects. We can’t deliver the results. You can!


So, we engineered a solution where you still pick everything, from architecture to transforms, yourself. We just take care of the boring parts: running the experiment and giving you insight into the training progress.
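To give a feel for the choices involved, here is a minimal sketch in plain Python of the kind of experiment specification you end up assembling. Every field name and value below is an illustrative assumption, not Hasty’s actual configuration schema:

```python
# Illustrative sketch only: these field names are hypothetical and do not
# reflect Hasty's actual configuration schema. They mirror the kinds of
# choices Model Playground exposes in its UI: architecture, optimizer,
# scheduler, and data transforms.
experiment = {
    "task": "instance_segmentation",
    "architecture": "mask_rcnn_resnet50_fpn",
    "optimizer": {"name": "adamw", "lr": 1e-4, "weight_decay": 1e-4},
    "scheduler": {"name": "cosine", "warmup_epochs": 2},
    "transforms": ["random_horizontal_flip", "color_jitter"],
    "epochs": 50,
}

# In Model Playground you make these choices with clicks instead of code;
# the platform then runs the experiment and tracks the metrics for you.
print(experiment)
```

In Model Playground you make exactly these kinds of choices, but with clicks instead of code, and the platform handles the run itself.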

Making it easier to work with your data

We started as an automated annotation tool three years ago. Today, we are (probably) the fastest annotation tool on the market. But why is this interesting in the context of Model Playground?


As we already handle your data when you create training data (either on our servers or by connecting to your storage), we now let you run your experiments in the same environment. This means no data loaders or extensive import/export jobs are needed anymore, freeing up your time for more meaningful work.


We include data versioning to ensure that models are trained on the correct data, no matter who sets up the training - so you no longer need spreadsheets to track this. With Hasty, you now have a single source of truth for your vision AI model.


Allowing for rapid experimentation

We have taken our time to ensure that you have plenty of architectures, optimizers, schedulers, etc., available at launch. As a team with many ML engineers ourselves, we didn’t want to limit our users to a few select options, because we think it’s important to experiment and figure out what works as you go, without spending hours getting architectures to work or reading through configs to figure out what’s missing to get that new optimizer to run.


We’ll continue adding more parts to Model Playground, but as of today, we have the following building blocks available:


(Infographic: the architectures, optimizers, schedulers, and other building blocks available at launch.)


We created a suite of building blocks for you to rapidly set up experiments and play with many different configurations.


Get to the ideal setup for your use case faster, and focus on producing results.


Is the one killer building block you need missing from the infographic above? Contact our Product Manager Kristian at kristian@hasty.ai, and he’ll make sure we look into it as soon as possible. It doesn’t take long to add things to our framework! ;)


Creating symbiosis between data and models

Data creation and curation tend to be the most expensive parts of any AI project. Today, a lot of the work is done either manually or by using pre-trained models to pre-label data. Both approaches come with drawbacks.


Firstly, why is it so hard to use your models to help with annotation? You have something that works. So, how do you get that model to assist with data creation and curation? Sure, you can pre-label the next batch of data, but doing so will scale up any issues your model has. We have seen teams spend more time cleaning this up than they would have spent labeling 100% manually.


We think we have a better solution. Our annotation environment uses a human-in-the-loop approach so that you only take the good predictions from the model and ignore the bad ones. Our experience shows this results in much faster data creation.
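A minimal sketch of the underlying idea, assuming a hypothetical prediction format (in Hasty this happens in the annotation UI rather than in code): only predictions the model is confident about are surfaced as suggestions for a human to confirm or correct.

```python
# A minimal sketch of the human-in-the-loop idea, not Hasty's actual code.
# Low-confidence predictions are dropped instead of being pre-labeled, so
# the annotator only confirms or corrects the model's confident suggestions.
from typing import Dict, List

def suggest_annotations(predictions: List[Dict], threshold: float = 0.8) -> List[Dict]:
    """Keep only predictions confident enough to show as suggestions."""
    return [p for p in predictions if p["score"] >= threshold]

# Hypothetical model output for one image: label, confidence score, box.
predictions = [
    {"label": "scratch", "score": 0.95, "box": [10, 20, 50, 60]},
    {"label": "dent", "score": 0.42, "box": [5, 5, 15, 15]},  # too uncertain
]

for p in suggest_annotations(predictions):
    print(f"Suggest {p['label']} ({p['score']:.0%}) for human review")
```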


With the launch of Model Playground, we also offer you the ability to replace our default models with your custom ones. When you are happy with how a model performs, you can switch models in two clicks. That way, what you do on the model side directly benefits your data workforce.


Additionally, by using your model for annotating and curating, you will get a much better idea of what works and what doesn’t. Deploy your model. Annotate a hundred images and see how the model performs on new data. Creating this feedback loop between data and model will allow you to find systemic issues way before users experience them.


We are running an alpha to bring your models directly into our environment. If you want to have access or know more about that, Tobias (tobias@hasty.ai) is your guy.


Removing the need for complex training pipelines and tricky integrations

One of our investors, Issac J Roth at Shasta, had an excellent explanation for why he wanted to invest in Hasty. Having been around the startup scene in San Francisco since the ’80s, he saw that the issues many teams struggle with today on ML projects are the same ones software engineering teams had before the boom of DevOps solutions. In short, you have to be your own plumber.


Today, you don’t have to build your own code version control – you use some flavor of Git. You don’t need to write an integration by hand to run tests and deploy – you use some continuous integration software. However, if you are doing ML, it feels like DevOps used to feel back in the day.


For most projects we’ve seen, this means building and integrating many different tools and services into a training pipeline. Beyond being a massive engineering task, this also comes with a host of problems. What should be a small change takes a long time, as you’ll need to refactor different parts of the pipeline. Integrations stop working as APIs change without notice, and you end up digging through logs to figure out what exactly went wrong this time.


With the launch of Model Playground, this is no longer necessary. By combining data cataloging, annotation, and curation with model building and experiment tracking, you no longer need to build that sprawling pipeline – everything is in one tool.


(Of course, there are still bits and pieces that you will have to code yourself for the actual application, but we hope that we can remove a sizable pain from your AI development processes.)


Helping you get to production - whatever that means for you

So you can train a model and use it to speed up data work, but that’s not the end goal. You want to get a model into production (or, at least, your boss does).


At this point, you might be wondering, “But how exactly are they going to limit me in how I use my model?” That’s a valid question. Many solutions out there offering something similar to Model Playground are walled gardens. They calculate that once they have your working model in their environment, they can keep squeezing you for money. Others let you move your model out of their environment but impose other restrictions: you might be able to export it, but only in their formats.


We didn’t like either approach when we were on the other side of the table, so we decided to go another way.


In Hasty, you can export any model trained in Model Playground in TorchScript and ONNX formats (TensorFlow support is in development). We support ARM, x86, Texas Instruments, NVIDIA Jetson, and a smattering of other processors. This means that any model you train with us can be used on most hardware and in most environments.
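For example, a TorchScript export can be loaded with stock PyTorch and an ONNX export with ONNX Runtime. The file names and the 1x3x224x224 input shape below are assumptions for illustration:

```python
# Loading a TorchScript export for inference with stock PyTorch.
# "model.pt" and the 1x3x224x224 input shape are illustrative assumptions.
import torch

model = torch.jit.load("model.pt")
model.eval()
with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))

# The same model from an ONNX export, run with ONNX Runtime.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: np.random.randn(1, 3, 224, 224).astype(np.float32)})
```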


Replacing default models and exporting your own models can be done in a couple of clicks

If you prefer a managed hosting solution, we provide that too. You can leave the model with us and access it via API. To make it as fair as we can, we charge only for usage, not for upkeep. Therefore, you only pay when you and your users get something of value back.
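Calling a hosted model then looks roughly like any authenticated REST request. The endpoint, header, and payload below are hypothetical placeholders, not Hasty’s documented API:

```python
# Hypothetical sketch of calling a hosted model over REST. The URL, header,
# and payload format are placeholder assumptions, not Hasty's documented API.
import requests

with open("part_photo.jpg", "rb") as f:
    response = requests.post(
        "https://api.hasty.ai/v1/models/<model_id>/predict",  # hypothetical endpoint
        headers={"Authorization": "Bearer <your_api_key>"},   # hypothetical auth
        files={"image": f},
        timeout=30,
    )

response.raise_for_status()
print(response.json())  # e.g. predicted labels, scores, and boxes
```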


In keeping with the metaphor, our garden is not walled but rather full of attractions that we believe will make you WANT to stay. We managed to convince our internal AI team - now we hope to do the same for you.


To benchmark different experiments against each other, you can use our model dashboard

Making vision AI affordable

Three years ago, the founders of Hasty were all working for a digital consultancy serving German Hidden Champions (the Mittelstand). We worked on many vision AI projects where we had to deliver production-ready models to European manufacturers, covering every use case from quality control of metal parts to automated urine analysis (yes, that’s a thing).


Back then, we hit many roadblocks when using new software. We would get quoted a five-figure sum for a PoC from an annotation tool (don’t even get us started on model hosting or upkeep!). So we were forced to build a lot of software ourselves that we could have bought if the pricing model had been more reasonable.


For many, this is still the case. In no other space is pricing as opaque as in ML. Most “pricing pages” have a “contact us” button. One company even told us, “What’s wrong with arbitrary pricing?” Having experienced it first-hand, we understand the frustration, and we hope to alleviate some of those pains.


Our philosophy has always been that we charge you when we deliver something of value. For us, that means pay-per-use pricing for AI automation and heavier computation. Our pricing model scales with your use case.


Give it a try and let us know what you think

Starting today, anyone with a Hasty account will also receive access to Model Playground. To get started, you only need to go here:


If you don’t have an account yet, you can sign up for free and try us out. It’s easy to import your data (or use the demo you get when signing up) and then take Model Playground for a spin.


If you want to know more about Model Playground, you can also check out our documentation or watch the video we are releasing tomorrow, where we walk through how to train a model in Hasty. We also built a wiki explaining all the ML terms needed to use Model Playground.


If you have any questions or feature requests, feel free to direct them either to me ([email protected]), our Head of Strategic Projects Tobias (tobias@hasty.ai), or our Product Manager Kristian (kristian@hasty.ai). If you are really eager, why not email all three?