As AI becomes mainstream, expect broad uptake of AI across multiple use cases in your organization. It is not unusual to find a variety of packages, multiple algorithms, and different approaches applied to similar problems at different consumption points. The reasons vary, ranging from the AI vendor a data scientist is comfortable working with to a plain disconnect between multiple data science teams.

AI is based on machine learning, so it makes sense to keep that learning consistent across your enterprise. For example, if you are training word corpora for different ML pipelines, try to reuse the same corpus (assuming it fits the purpose). This not only reduces maintenance overhead but also provides more consistency in how natural language is interpreted.

Models and corpora are not the only reusable components; think of the other parts of this machinery. Curated data can be repurposed to train multiple models. A library of reusable features and transformations can lower the cost of building and maintaining models, and reusable pipelines (or parts of pipelines) can cut maintenance costs further (a minimal sketch follows below). Another big benefit of reusability is that multiple models share the learning accumulated on these reusable components over time.

And that's just one more tip towards building practical AI. #abhayPracticalAI #artificialintelligence #ai
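To make the idea of reusable transformations concrete, here is a minimal sketch in Python using scikit-learn. The helper name and the two example models (churn and upsell) are hypothetical illustrations, not from the post itself: both model pipelines consume the same preprocessing component, so any improvement to it accrues to every model that reuses it.

```python
# Minimal sketch (assumes scikit-learn is installed): one reusable
# preprocessing pipeline shared by two different models.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def build_shared_preprocessing():
    """Single source of truth for feature transformations, reused across models."""
    return Pipeline([
        ("impute", SimpleImputer(strategy="median")),  # consistent missing-value handling
        ("scale", StandardScaler()),                   # consistent feature scaling
    ])

# Two consumption points (hypothetical names) reuse the same component.
churn_model = Pipeline([
    ("prep", build_shared_preprocessing()),
    ("clf", LogisticRegression(max_iter=1000)),
])

upsell_model = Pipeline([
    ("prep", build_shared_preprocessing()),
    ("clf", RandomForestClassifier(n_estimators=200)),
])
```

The design point is that the preprocessing lives in one place: fix a bug or add a feature there, and every pipeline that calls build_shared_preprocessing() picks it up, rather than each team maintaining its own copy.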