• I think a lot of people are confusing automation with clearly defined logical steps for AI or machine learning, when it's really nothing more than automated scripts with rules. I've talked with many data scientists who struggle to explain how things like machine learning actually work compared to how people think they work.

    One of the best ways I've heard it explained is that rules predefined by humans are actually skewing / influencing the results. True AI lets the data or variables determine what to do (actually "thinking" and learning), and then learns from the outcome, either making the same choice again or making a different choice based on the prior results.

    For example, in marketing it's common for someone to say: if the data shows X, then it's good and you should do Y. The problem is that even though you can automate something to look for what's "good" and attach an action to it, people can't wrap their heads around the possibility that what they think is good is actually bad based on what the data is telling them. This is where machine learning really comes in: not letting a human assumption push it toward the wrong decision, but letting the data tell you which decision to actually make without that outside influence (i.e., the programmer saying "turn left if this happens, turn right if that happens"). There's a rough sketch of that contrast below.
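    To make that concrete, here's a minimal Python sketch. The campaign numbers and the 2% click-through threshold are entirely made up for illustration: the first function is the human's hardcoded rule, the second just picks whatever cutoff the historical outcomes actually support.

```python
# Hypothetical marketing example: a hand-coded rule vs. a rule picked by past outcomes.
# All data and the 2% threshold are invented for illustration.

# Historical campaigns: (click_through_rate, actually_converted_well)
history = [
    (0.010, False), (0.015, False), (0.020, False), (0.025, False),
    (0.030, True),  (0.035, True),  (0.040, True),  (0.045, True),
]

# 1) Rule-based automation: a human decided "CTR above 2% is good".
def human_rule(ctr):
    return ctr > 0.02  # the programmer's assumption, baked in

# 2) "Learning" the rule: search for the threshold that best matches
#    what the outcome data says was actually good.
def learn_threshold(data):
    best, best_correct = None, -1
    for t in sorted(ctr for ctr, _ in data):
        correct = sum((ctr > t) == good for ctr, good in data)
        if correct > best_correct:
            best, best_correct = t, correct
    return best

learned_t = learn_threshold(history)
print(f"human threshold: 0.020, learned threshold: {learned_t:.3f}")
print("human rule on CTR=0.025:", human_rule(0.025))    # human says "good"
print("learned rule on CTR=0.025:", 0.025 > learned_t)  # the outcomes say "bad"
```

    In this toy case the human's 2% rule flags a 2.5% CTR campaign as good, while the historical outcomes show campaigns at that level never converted, so the learned cutoff lands higher. Real machine learning is obviously far more sophisticated, but the point is the same: the decision boundary comes from the data, not from what the programmer assumed.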

    So yeah, a lot of this is becoming buzzwordy when in reality it's just automation with logic defined by humans, who are basically skewing the decisions based on what they think is right or wrong; the machine isn't deciding anything.