As every vendor adds an AI sticker to its product lineup, the day-to-day practicality of actually using these tools remains an open question. Another important factor is trust in the output.
Having a tool ‘build’ a configuration might seem appealing at the outset, but what if the time saved upfront is then spent parsing the configuration for errors and manually rectifying each one? Is that really saving time?
And what if you shorten that ‘trust but verify’ phase and end up creating an outage? Was any time really saved?
That doesn’t mean these tools are useless and should be set aside in favour of manually writing every configuration.
We’ll look at an example of using multiple models to cover the two main tasks: building the configuration and verifying the configuration.
Why two models? Simply because we don’t want a model checking its own homework. AI models are notorious for being confident even when they’re wrong, and the onus is ultimately on the end user to perform the validation. However, one model can verify the work of another, which shortens that validation time further.
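To make that concrete, here is a minimal sketch of the build-then-verify flow using the OpenAI Python SDK. The model names, prompts, and the sample requirement are assumptions for illustration only; any two capable models, ideally from different providers, could fill the builder and verifier roles.

```python
# Minimal sketch of a two-model build/verify loop.
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model names below are placeholders -- swap in any two capable models,
# ideally from different providers.
from openai import OpenAI

client = OpenAI()

BUILDER_MODEL = "gpt-4o"    # assumed: the model that drafts the configuration
VERIFIER_MODEL = "o3-mini"  # assumed: a different model that reviews the draft


def build_config(requirements: str) -> str:
    """Ask the builder model to draft a device configuration."""
    resp = client.chat.completions.create(
        model=BUILDER_MODEL,
        messages=[
            {"role": "system", "content": "You are a network engineer. Output only the configuration."},
            {"role": "user", "content": requirements},
        ],
    )
    return resp.choices[0].message.content


def verify_config(requirements: str, config: str) -> str:
    """Ask a second model to check the draft against the original requirements."""
    resp = client.chat.completions.create(
        model=VERIFIER_MODEL,
        messages=[
            {"role": "system", "content": "You review network configurations. List any errors or omissions."},
            {"role": "user", "content": f"Requirements:\n{requirements}\n\nProposed configuration:\n{config}"},
        ],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical requirement, purely for illustration.
    reqs = "Configure VLAN 20 named USERS and trunk it on interface GigabitEthernet0/1."
    draft = build_config(reqs)
    findings = verify_config(reqs, draft)
    print(draft)
    print("--- verifier findings ---")
    print(findings)
```

The verifier’s findings still go to a human for the final call; the second model shortens the review, it doesn’t replace it.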
