It sounds very doable, and how you approach it depends on the volume of data.
We extract a lot of data from Oracle using the Attunity drivers because the built-in drivers are too slow. Here's an article on connecting to MySQL.
Assuming you don't want to extract the entire data set every time, you need a way to identify new or changed data. If there is a reliable datetime column, you can extract only the rows that have changed since the last time you extracted. I have done this in the past, but the column was a last-edited time and the source system liked to backdate it, so I extended my time window back an hour from the last time I ran the extract. It's better to extract too much data (within reason) and filter it later than to risk missing changes.
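If it helps, here's a minimal sketch of that time-window source query in T-SQL, assuming a LastModified column on the source table and a logging table that records when each extract ran (all table and column names are placeholders):

```sql
-- Hypothetical incremental extract: pull everything changed since the last run,
-- padded back one hour to allow for backdated timestamps in the source.
DECLARE @LastExtract DATETIME;

SELECT @LastExtract = MAX(ExtractStartTime)
FROM   dbo.ExtractLog;          -- assumed audit table recording each run

SELECT  o.OrderID,
        o.CustomerID,
        o.LastModified
FROM    SourceDB.dbo.Orders AS o
WHERE   o.LastModified >= DATEADD(HOUR, -1, @LastExtract);
```

You would use something like this as the source query in your data flow and log the run time afterwards, so the next run knows where its window starts.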
If there is no way to identify new data, or changes are not an issue, but there is a unique identifier of some kind, you can use a Lookup in a data flow with a conditional split. Direct the "no match" output towards your import table, and direct the "match" output to a Row Count variable. That lets you import the new data and count the old data to audit the totals, if applicable.
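If you'd rather do the same check set-based once the data lands in a staging table, this is a rough T-SQL equivalent of that lookup pattern, with illustrative table and key names:

```sql
-- Set-based version of the lookup pattern: count rows whose key already
-- exists in the destination, then insert only the rows that don't.
DECLARE @ExistingRows INT;

SELECT @ExistingRows = COUNT(*)
FROM   dbo.ImportStaging AS s
WHERE  EXISTS (SELECT 1 FROM dbo.Orders AS d WHERE d.OrderID = s.OrderID);

INSERT INTO dbo.Orders (OrderID, CustomerID, LastModified)
SELECT s.OrderID, s.CustomerID, s.LastModified
FROM   dbo.ImportStaging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Orders AS d WHERE d.OrderID = s.OrderID);
```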
Once you have your data in an import table, you can look for updates and changes and transform the data into the types you need for the API.
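As a rough example of that update/insert step, a MERGE against the import table works well (again, the names are illustrative and assume the same LastModified column):

```sql
-- Apply changes from the import table: update rows that already exist and
-- have a newer LastModified value, insert rows that are completely new.
MERGE dbo.Orders AS d
USING dbo.ImportStaging AS s
    ON d.OrderID = s.OrderID
WHEN MATCHED AND s.LastModified > d.LastModified THEN
    UPDATE SET d.CustomerID   = s.CustomerID,
               d.LastModified = s.LastModified
WHEN NOT MATCHED BY TARGET THEN
    INSERT (OrderID, CustomerID, LastModified)
    VALUES (s.OrderID, s.CustomerID, s.LastModified);
```

From there you can cast or convert the columns into whatever types the API expects.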