
Test AI with known data

So the data science is done, the programming is completed, and the prediction API is ready. The pipeline might also give you an estimate of expected accuracy and other success measures, typically using techniques built into the data science pipeline such as k-fold cross-validation. But does that give you the confidence that this prediction rule will work for you? The pipeline is new, and as a business user you might not know enough about data science to feel confident. So depend on what is known to you for testing your AI.

I would suggest identifying two things: a business user who understands the data being predicted upon, and a dataset that this business user knows well. While selecting the dataset, ensure that it covers all known scenarios, that it is preferably not a dataset that was used for training the model, and that it is good, i.e. verified to be accurate (search my blog page for "good data").

Do save the dataset and its results. As the pipeline/model changes, re-testing on the same dataset will help you compare different models. Test your AI with known data and you take one more step towards practical AI. #abhayPracticalAI #ArtificialIntelligence #AI
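As a rough sketch of this idea, the snippet below scores a prediction function against a small "known" dataset of verified input/expected-label pairs and saves the results for later comparison. All names here (`evaluate_known_dataset`, `toy_predict`, the file name) are illustrative assumptions, not part of any specific pipeline; in practice the toy function would be replaced by a call to your real prediction API.

```python
# Sketch: score a prediction function against a verified "known" dataset
# and persist the results so future model versions can be compared on
# exactly the same cases. Names are hypothetical.
import json

def evaluate_known_dataset(predict, known_cases):
    """predict: callable(inputs) -> label.
    known_cases: list of (inputs, expected_label) pairs that a
    business user has verified to be accurate."""
    results = []
    for inputs, expected in known_cases:
        got = predict(inputs)
        results.append({"input": inputs, "expected": expected,
                        "predicted": got, "match": got == expected})
    accuracy = sum(r["match"] for r in results) / len(results)
    return accuracy, results

# Toy stand-in for the real prediction API.
def toy_predict(record):
    return "high" if record["amount"] > 100 else "low"

known = [({"amount": 250}, "high"),
         ({"amount": 40}, "low"),
         ({"amount": 120}, "high")]

accuracy, results = evaluate_known_dataset(toy_predict, known)
print(f"accuracy on known data: {accuracy:.0%}")

# Save the dataset together with its results, so the same cases can
# be re-run unchanged whenever the pipeline/model changes.
with open("known_dataset_results.json", "w") as f:
    json.dump({"accuracy": accuracy, "results": results}, f, indent=2)
```

Keeping the per-case match flags (not just the overall accuracy) makes it easy to see which known scenarios a new model version starts getting wrong.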
