Your assumption that indexes are stored in memory is slightly incorrect. All data, whether from a table (heap or clustered index) or from a non-clustered index, is stored on disk in the form of 8 KB pages. When SQL Server processes a query, it looks for the pages it needs in memory (the buffer pool). If a page is not found there, it is read in from disk and cached for a period of time, depending on how often the page is used and how much memory pressure the server is under. Because the non-leaf pages of indexes are frequently accessed, these will typically stay in memory.
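If you want to see this caching in action, you can query the buffer pool directly. A minimal sketch (it assumes you have VIEW SERVER STATE permission; the DMV reports one row per cached 8 KB page):

```sql
-- Count the 8 KB pages currently cached in the buffer pool, per database.
-- Each row in sys.dm_os_buffer_descriptors is one cached page.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS cached_pages,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;
```

Run it before and after a large query and you can watch pages from that database being pulled into cache.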
In a well-tuned database, table scans should be rare. You will certainly not be scanning the past 7 years of data if you're only interested in the last week's. This means the amount of data you need to keep in memory is much smaller than the entire database size.
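As a sketch of what "only the last week" looks like in practice (the `dbo.Orders` table, its columns, and the index name are hypothetical), an index on the date column lets SQL Server seek straight to the recent rows instead of scanning the whole table:

```sql
-- Hypothetical schema: an index keyed on OrderDate, covering the
-- columns the query returns, turns a full scan into a narrow seek.
CREATE INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID, TotalDue);

-- Only the last week's pages need to be read (and therefore cached).
SELECT CustomerID, TotalDue
FROM dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -7, GETDATE());
```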
If you do have a table scan, your 7 years of data will likely not be in memory (it may not even fit in memory!), so the query will have to perform a lot of I/O against the disk - the query is "running from disk". If your query is well tuned, only the necessary pages will be read. If that is, say, 1,000 pages (8 MB), it will run quickly whether the pages are in memory or on disk (although obviously reading from memory will be an order of magnitude faster).
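You can measure exactly how many pages a query reads, and how many came from disk, with `SET STATISTICS IO` (the table in the example is hypothetical):

```sql
-- Reports logical reads (pages served from memory) and physical reads
-- (pages pulled from disk) per table in the Messages tab.
SET STATISTICS IO ON;

SELECT CustomerID, TotalDue
FROM dbo.Orders            -- hypothetical table
WHERE OrderDate >= DATEADD(DAY, -7, GETDATE());

SET STATISTICS IO OFF;
```

Run the query twice: on the second run the physical reads typically drop to 0, because the pages are now cached, while the logical reads stay the same.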
Even if you have enough memory for your entire database to fit, you will still want tuned queries. Even with everything in memory, scans create contention (blocking) and use far more CPU.