Prediction end point - streaming or batch?
- Abhay Kulkarni
- Apr 24, 2020
- 1 min read
We have discussed a few aspects of defining a prediction end point. Here is one more. Always work with your business users to understand the various ways the API will be called at each point of consumption.
- Will it be called for every individual record, or for a batch of records?
- If individual, will that be real time, near real time, or offline? What would be the frequency of prediction?
- If batch, can it be a background process? How many records can be expected in that batch? Does that number vary? What would be the frequency of batches?
- What would trigger the prediction?
- How much time does the prediction API have before it must return results?
- What happens if some of the records in a batch cannot be predicted?
- Is a summary of predictions required? Is some other sort of aggregation of prediction results required?
Please note that having answers to these questions will help you refine the API, but it does not mean the API has to take on all the work. For example, aggregation, if required, might be done in the application after the prediction results are returned. That would potentially make the prediction API more reusable. Understanding all of the above can help you take one more step towards practical AI. #abhayPracticalAI #machinelearning #artificialintelligence
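To make the per-record failure and summary questions concrete, here is a minimal sketch of a batch prediction handler. All names (`predict_one`, `predict_batch`, the toy sum-of-features "model") are hypothetical illustrations, not any particular framework's API; the point is that each record carries its own status, so one bad record does not fail the whole batch, and a summary is computed once at the end.

```python
def predict_one(record):
    """Toy stand-in for a real model: predicts the sum of a record's features."""
    if "features" not in record:
        raise ValueError("missing features")
    return sum(record["features"])

def predict_batch(records):
    """Predict every record in a batch, recording per-record failures
    instead of aborting the whole batch."""
    results = []
    for i, record in enumerate(records):
        try:
            results.append({"index": i, "status": "ok",
                            "prediction": predict_one(record)})
        except ValueError as exc:
            # A record that cannot be predicted is reported, not fatal.
            results.append({"index": i, "status": f"error: {exc}",
                            "prediction": None})
    # Lightweight summary; richer aggregation could live in the caller instead,
    # which keeps the prediction API more reusable.
    summary = {"total": len(records),
               "succeeded": sum(1 for r in results if r["status"] == "ok")}
    return {"results": results, "summary": summary}

batch = [{"features": [1, 2, 3]}, {"no_features": True}]
response = predict_batch(batch)
```

Here `response["summary"]` would report two records with one success, and the caller can decide how to surface the failed record.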