So you have the prediction rule available. The model has been trained, and you have a reasonable amount of confidence in its estimated performance. Should you go live with all users, or roll it out as a pilot or in phases? The answer, as in most cases, is "it depends". If the decision the prediction rule influences carries low risk, going full blast might be fine. That, to me, is the key point: evaluate the risk of the decision when it is taken under the influence of the prediction. If there is enough evidence that the prediction reduces the risk of making the wrong decision, you might want to go full throttle.

But consider a variety of other aspects as well. For example, even if the risk is low, will this AI change the way people work? Do we have the right training in place for our users to understand the new to-be process? Does the prediction work better for a certain subset of the data? Is there a set of data you know is good, and hence the right place to start implementing AI? What is the impact on other initiatives if this AI project is not adopted? Should you roll the AI out to a smaller set of folks who you know will give you honest feedback, and/or will champion the new way of working to their colleagues?

Think through such aspects, and decide how you want to pilot or phase your AI go-live. Rolling out AI in phases might be one of those steps that take you closer to practical AI. #abhayPracticalAI #artificialintelligence #machinelearning
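As a rough illustration of the "smaller set of folks first" idea, here is a minimal Python sketch of a rollout gate. Everything in it (the pilot list, the percentage, the function names) is hypothetical; the point is only that the decision of who sees the prediction can be made explicit and widened in phases.

```python
# Minimal sketch of a phased-rollout gate for serving model predictions.
# All names (PILOT_USERS, ROLLOUT_PERCENT, serve_prediction) are hypothetical.

import hashlib

# Phase 1: a hand-picked pilot group likely to give candid feedback.
PILOT_USERS = {"analyst_01", "planner_07", "ops_lead_03"}

# Phase 2: a deterministic percentage of the wider user base.
ROLLOUT_PERCENT = 10  # expand gradually as confidence grows


def in_rollout(user_id: str) -> bool:
    """Return True if this user should see the model's prediction."""
    if user_id in PILOT_USERS:
        return True
    # Hash the user id so the same user always lands in the same bucket.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT


def serve_prediction(user_id: str, model_output, as_is_process):
    """Show the prediction to users in the rollout; everyone else keeps the existing process."""
    if in_rollout(user_id):
        return model_output
    return as_is_process()
```

Starting with a named pilot group plus a small percentage bucket makes it easy to widen the rollout, or pull it back, without touching the model itself.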