2. ... [H]ow to address this problem?
That depends on the specific details of the situation.
2A. How many rows are in the remote table?
2B. How much data do you need from the remote table?
2C. How large is the set of matching criteria from the local table? For example, if the tables are joined on id, what is the total volume of id data that would need to be pushed to the remote server to allow all matching selection to be done there?
More generally, SQL Server will need to move the remote data locally in order to join to it. Thus, if the total amount of remote data is trivial -- given your system's capacity to transmit the data -- just copy all the remote data locally, then join to it there. If the total remote data is (very) large and the list of matching ids is (relatively) small, then copy the id list to the remote instance to reduce the data selected from there.
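A rough sketch of both approaches, assuming a linked server named RemoteSrv, hypothetical tables dbo.LocalTable and dbo.RemoteTable joined on id, and a remote database RemoteDb (all names here are placeholders, not from the original question):

```sql
-- Option 1: remote data is small -- copy it all locally, then join there.
SELECT r.*
INTO #RemoteCopy
FROM RemoteSrv.RemoteDb.dbo.RemoteTable AS r;

SELECT l.*, rc.*
FROM dbo.LocalTable AS l
INNER JOIN #RemoteCopy AS rc
    ON rc.id = l.id;

-- Option 2: remote data is (very) large, matching id list is small --
-- push the ids to a work table on the remote instance first, e.g.:
-- INSERT INTO RemoteSrv.RemoteDb.dbo.MatchIds (id)
-- SELECT DISTINCT id FROM dbo.LocalTable;
-- then let the join run remotely via a pass-through query, so only
-- matching rows come back across the wire:
SELECT *
FROM OPENQUERY(RemoteSrv,
    'SELECT r.*
     FROM RemoteDb.dbo.RemoteTable AS r
     INNER JOIN RemoteDb.dbo.MatchIds AS m
         ON m.id = r.id');
```

The key point of Option 2 is that the pass-through query executes entirely on the remote server, so the id filtering happens before any data is transmitted.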
Often the data volume will fall between those extremes. In those cases it's often best to create a covering index on the remote table for this specific selection. Now, in general, I'm not a fan of the too-common approach of creating gazillions of covering indexes, one for (almost) every query (*), but for remote usage, covering indexes can be especially performant.
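For example, a covering index for this selection might look like the following (table and column names are placeholders for whatever the remote query actually joins on and returns):

```sql
-- On the remote server: key the index on the join column and INCLUDE
-- only the columns the query returns, so the selection is satisfied
-- entirely from the index without touching the base table.
CREATE NONCLUSTERED INDEX IX_RemoteTable_id_covering
ON dbo.RemoteTable (id)
INCLUDE (col1, col2);  -- hypothetical column names
```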
(*) because in the vast majority of cases, focusing on setting up the best clustering of the table yields much better overall performance and lower resource usage than scads of covering indexes on a by-default identity clustering of the table.
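To illustrate the footnote: rather than stacking covering indexes on top of a default identity clustering, you can often cluster the table on its most common search/join column in the first place (again, names are placeholders):

```sql
-- Cluster on the column most queries seek/join on, instead of the
-- by-default identity column; many would-be covering indexes then
-- become unnecessary.
CREATE CLUSTERED INDEX CX_RemoteTable_id
ON dbo.RemoteTable (id);
```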
SQL DBA, SQL Server MVP (07, 08, 09)