The standard flat file source can skip the first n rows and handle a header row, but it can't skip arbitrary rows such as 2, 3, n-1, and n. One possible solution is to reconfigure the flat file source; otherwise you may have to write actual code that opens the file, reads it, and parses it yourself.
If rows 2, 3, and the last two rows raise errors because they can't be parsed into the correct number of columns, add an error output to the flat file source that redirects those rows somewhere else. Set the error disposition to Redirect row instead of Fail component; the remaining rows are then read normally.
You could use a Script Component configured as a source, with special handling for the four extra rows (ignore them, or do something else with them if the data is useful). It has to open the file, read and parse each row, and load the output buffer with the field values. The rest of the data flow can then use normal components.
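In SSIS the script source would be written in C# against the output buffer, but the row-filtering logic it needs is easy to show. The sketch below, in Python, illustrates one way to do it: read all lines, drop the last two, skip rows 2 and 3 by position, and parse the rest. The function name, delimiter, and the exact row numbers to skip are assumptions for illustration.

```python
import csv

def read_filtered(path, skip_head={2, 3}, skip_tail=2, delimiter=","):
    """Yield parsed rows, skipping the rows in skip_head (1-based,
    counting the header as row 1) and the last skip_tail rows."""
    with open(path, newline="") as f:
        lines = f.readlines()
    # Drop the trailing rows first, then filter the early ones by position.
    kept = lines[:len(lines) - skip_tail] if skip_tail else lines
    for number, line in enumerate(kept, start=1):
        if number in skip_head:
            continue
        # Parse one line at a time, the same way a script source would
        # before pushing the values into its output buffer.
        yield next(csv.reader([line], delimiter=delimiter))
```

A C# script source would do the same thing inside `CreateNewOutputRows`, calling `Buffer.AddRow()` only for the rows that survive the filter.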
You could use a Script Task that modifies the input files by deleting rows 2, 3, and the last two rows (or that makes a copy of the input file excluding those four rows). Then use a data flow with the regular flat file source to read the result.
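The pre-processing approach can be sketched as follows. An SSIS Script Task would do this in C#, but the logic is language-independent; this Python version is only an illustration, and the function name and paths are assumptions.

```python
def copy_without_extras(src_path, dst_path):
    """Copy src_path to dst_path, dropping rows 2 and 3 (1-based)
    and the last two rows, so a plain flat file source can read it."""
    with open(src_path) as src:
        lines = src.readlines()
    # Remove the final two rows first (guarding against short files),
    # then filter out rows 2 and 3 by position.
    body = lines[:-2] if len(lines) > 2 else []
    kept = [line for i, line in enumerate(body, start=1) if i not in (2, 3)]
    with open(dst_path, "w") as dst:
        dst.writelines(kept)
```

Writing to a copy rather than rewriting the original keeps the source file intact in case the filtering assumptions turn out to be wrong.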