I was wrong about that: the Conditional Split still creates the file, sorry. I was thinking off the top of my head.
What if you still go down the path of using the Row Count task in your data flow to set up your variable, and then, as the next step, write the records in the data flow out to the flat file? (Yes, this will always create the file.)
After that, add a Script Task connected by a precedence constraint whose expression checks whether the row count variable is zero. When it is, the Script Task runs and its script deletes the file.
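Inside SSIS the Script Task would be written in C# or VB.NET and read the row count from `Dts.Variables`; as a language-neutral sketch of the logic only, here it is in Python (the function name and paths are made up for illustration):

```python
import os

def delete_if_empty(path: str, row_count: int) -> bool:
    """Mimic the Script Task: remove the flat file when the data
    flow's Row Count variable came back as zero."""
    if row_count == 0 and os.path.exists(path):
        os.remove(path)   # empty output file: delete it
        return True
    return False          # rows were written, keep the file
```

In the package itself, the precedence constraint's expression (e.g. `@[User::RowCount] == 0`) can gate the Script Task so it only fires when there was nothing to keep.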
Not knowing the size of your files, here is an (UGLY) alternative if they are tiny: have your data flow write into a Recordset destination, which is simply an object variable. Then use that variable in a Foreach Loop container to execute a second data flow with something as simple as an OLE DB source running "SELECT 1 AS dummy" (all data flows must have a source component). Use the Derived Column transformation to create new fields from the loop variables you are mapping each record into, and send that data flow to a flat file destination with the overwrite property turned off. If there are no records, the Foreach Loop never executes, so the file is never created.
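The key property of plan B is that the loop body, and therefore the file, only exists when rows exist. A minimal Python sketch of that behavior (the function name is hypothetical; in SSIS this is the Foreach Loop plus a flat file destination in append mode):

```python
import os

def write_rows_foreach(rows, path):
    """Mimic plan B: iterate the recordset and append one line per
    record. With zero rows the loop body never runs, so the output
    file is never created at all."""
    for row in rows:
        # overwrite turned off -> open in append mode each pass
        with open(path, "a") as f:
            f.write(",".join(str(v) for v in row) + "\n")
```

With an empty recordset the function returns without ever touching the filesystem, which is exactly the "no empty file" outcome you want.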
PLAN B IS UGLY, I WILL ADMIT THAT. I would much rather use the Script Task to just delete the file when the record count variable equals 0.