Abhay Kulkarni

Prediction end point - streaming or batch?

We have discussed a few aspects of defining a prediction end point. Here is one more. Always work with your business users to understand the various ways the API will be called at its points of consumption. Will it be called for every individual record, or for a batch of records?

If individual: will the call be real time, near real time, or offline? What would be the frequency of prediction?

If batch: can it be a background process? How many records can be expected in a batch, and does that number vary? What would be the frequency of batches? What would trigger the prediction? How much time does the prediction API have before it must return results? What happens if some of the records in the batch cannot be predicted? Is a summary of predictions required, or some other aggregation of prediction results?

Note that having answers to these questions will help you refine the API, but it does not mean the API has to take on all the work. Aggregation, for example, might be done in the application after it receives the prediction results; that would potentially make the prediction API more reusable. Understanding all of the above helps you take one more step towards practical AI. #abhayPracticalAI #machinelearning #artificialintelligence
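To make two of these points concrete, here is a minimal sketch of a batch prediction function that reports per-record failures instead of failing the whole batch, and leaves aggregation to the calling application. All names here (`PredictionResult`, `predict_batch`, the `feature` field, the stand-in scoring rule) are illustrative assumptions, not an actual API from the post:

```python
from dataclasses import dataclass
from typing import Any, Optional

# Hypothetical per-record result: the API reports each record's outcome,
# so one bad record does not sink the whole batch.
@dataclass
class PredictionResult:
    record_id: str
    prediction: Optional[float]      # None when the record could not be scored
    error: Optional[str] = None

def predict_batch(records: list[dict[str, Any]]) -> list[PredictionResult]:
    """Score a batch; records that cannot be predicted are reported, not dropped."""
    results = []
    for rec in records:
        try:
            # Stand-in scoring rule; a real model call would go here.
            score = 0.01 * float(rec["feature"])
            results.append(PredictionResult(rec["id"], score))
        except (KeyError, TypeError, ValueError) as exc:
            results.append(PredictionResult(rec.get("id", "?"), None, str(exc)))
    return results

# Aggregation stays in the calling application, keeping the API reusable.
batch = [{"id": "a", "feature": 50}, {"id": "b"}, {"id": "c", "feature": 80}]
results = predict_batch(batch)
scored = [r.prediction for r in results if r.error is None]
summary = {"ok": len(scored), "failed": len(results) - len(scored)}
```

Keeping the summary outside `predict_batch` is the design choice the post argues for: the same endpoint can then serve callers that want raw per-record scores and callers that want their own aggregations.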


