Create a sound classification AI model to identify movement voice commands

1. Login

Make sure to visit: app.cogniflow.ai
Then log in to the platform with your username and password, or sign up for free.

2. Dashboard

The home page shows all the experiments that you have created.
You can filter by experiment type (image, text, or audio) and search for a specific experiment.



3. Create experiment I

To create a new experiment, click the "Create new experiment" button and then select the experiment type: Audio Based.



4. Create experiment II

Then you define a name and description for your experiment. Optionally, you can upload an image to identify your experiment.



5. Create experiment III

Now it's time to upload the data that will be used to train the platform on the main patterns of the problem you want to solve. In this case, we need a zip file containing audio files organized in folders, where each folder represents the movement we want to identify/classify.
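The expected zip layout (one folder per class, audio files inside) can be assembled with a short script. This is only a convenience sketch: the movement names ("up", "down", etc.) and the `.wav` extension are assumptions, so substitute your own labels and formats.

```python
import zipfile
from pathlib import Path

# Hypothetical class folders -- use the movement names from your own dataset.
CLASSES = ["up", "down", "left", "right"]

def build_training_zip(dataset_dir: str, zip_path: str) -> int:
    """Package audio files into the folder-per-class layout the platform expects.

    dataset_dir/
        up/    command_01.wav, command_02.wav, ...
        down/  ...

    Returns the number of audio files added to the archive.
    """
    added = 0
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for label in CLASSES:
            for audio in sorted(Path(dataset_dir, label).glob("*.wav")):
                # Store each file under its class folder, e.g. "up/command_01.wav",
                # so the folder name doubles as the class label.
                zf.write(audio, arcname=f"{label}/{audio.name}")
                added += 1
    return added
```

Running `build_training_zip("my_dataset", "training.zip")` then produces the zip you upload in this step.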



You can download the dataset here: Training and Validation



6. Create experiment IV

Optionally, you can upload a second zip file containing audio files to validate the trained models. If not provided, the platform will automatically set aside 20% of the training audio for this purpose.
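The automatic hold-out can be pictured as a simple shuffled split. This is not the platform's actual code, just a minimal sketch of the idea:

```python
import random

def train_validation_split(files, validation_fraction=0.2, seed=42):
    """Shuffle the examples and reserve a fraction of them for validation,
    roughly what an automatic 20% hold-out does."""
    rng = random.Random(seed)       # fixed seed -> reproducible split
    shuffled = files[:]             # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * validation_fraction))
    return shuffled[n_val:], shuffled[:n_val]   # (train, validation)
```

With 100 training audios and the default fraction, 80 would be used for training and 20 for validation.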



7. Create experiment V

In the last step, if you are an expert user, you can optionally modify the experiment's default configuration. If the default configuration is kept, the platform will use the best candidate models for your experiment type.



8. Experiment page I

Congrats! You have completed all the required steps to create the experiment: you have defined your problem and provided the data.

You’ve finished! Just wait until the platform finds the best solution (model) for your problem!



9. Experiment page II

If you want, you can go to the Dataset tab and see some statistics about your data. We keep your dataset for 30 days in case you want to make changes and upload a new version (this can be helpful if the results are not good).



10. Email notification

Once the experiment is finished and the platform has found a good solution for your problem, you will receive a notification in your mailbox with a basic summary of the results and a link to the experiment page.



11. The experiment is done, your model is ready!

Once the experiment is finished, you will find on the experiment page the best model the platform has built for your problem. You can test it directly in the browser by clicking the "Use this model" button. You can also try the other trained models by clicking the "Try another model" button.

And that's all! You have a ready-to-use service for recognizing voice commands with 92% accuracy!



12. Test model page

On the test model page, you can select an audio file from your computer or capture audio from the microphone (if you are using a mobile device) and get the model's result immediately.

Furthermore, you can listen to your audio and visualize its waveform and spectrogram views. And last but not least, you can get the code required to integrate the model into your project or app!
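The spectrogram view shown on this page is, under the hood, a short-time Fourier transform of your audio. As a local illustration (not Cogniflow's code), here is a minimal NumPy sketch that computes one for a synthetic tone:

```python
import numpy as np

def spectrogram(signal: np.ndarray, frame_size: int = 256, hop: int = 128) -> np.ndarray:
    """Magnitude spectrogram via a short-time Fourier transform:
    slide a Hann-windowed frame over the signal and FFT each frame."""
    window = np.hanning(frame_size)
    frames = [
        np.abs(np.fft.rfft(signal[start:start + frame_size] * window))
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    # Shape: (frequency_bins, time_frames) -- frequency on the vertical axis,
    # time on the horizontal, as in the platform's spectrogram view.
    return np.array(frames).T

# Example: one second of a 440 Hz tone sampled at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
```

For this tone, the energy concentrates around the FFT bin nearest 440 Hz (bin ≈ 440 / (8000 / 256) ≈ 14).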



13. Predict API docs - OpenAPI

If you want, you can click "See the OpenAPI (formerly Swagger) specification for more details" and explore all the details of the RESTful API used to get the model's predictions.
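Once you have the endpoint details from the OpenAPI page, calling the Predict API comes down to an authenticated HTTP POST. The sketch below only assembles the pieces of that request; the URL path, header name, and payload fields are illustrative placeholders, so copy the real ones from your own experiment's specification.

```python
# NOTE: the endpoint, header name, and payload fields below are hypothetical
# placeholders -- take the actual values from your experiment's OpenAPI page.
API_URL = "https://predict.cogniflow.ai/audio/classification/predict/<MODEL_ID>"

def build_predict_request(api_key: str, audio_base64: str, fmt: str = "wav") -> dict:
    """Assemble the URL, headers, and JSON body for a prediction call.

    Sending it is a plain HTTP POST, e.g. with the `requests` library:
        req = build_predict_request("YOUR_API_KEY", encoded_audio)
        # response = requests.post(req["url"], headers=req["headers"], json=req["json"])
        # print(response.json())
    """
    return {
        "url": API_URL,
        "headers": {"x-api-key": api_key, "Content-Type": "application/json"},
        "json": {"format": fmt, "base64_audio": audio_base64},
    }
```

The audio itself travels base64-encoded in the JSON body in this sketch; check the specification for the exact field names your model expects.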



14. Let's recap

In this guide you have learned:

How to use Cogniflow to build an AI model for recognizing voice movement commands
The main steps:

Define your problem (audio based)
Upload your data (audio file examples organized in folders by movement type)
Wait until a model for your problem is ready
Check results and test the model in the browser
Use the model through the Predict API endpoint





Visit cogniflow.ai for more insights!