Another way to approach this is to estimate it from the cached query stats. Like most suggestions on this topic, this comes with the caveat that it's not guaranteed to be completely accurate.
Having said that, if the applications each have their own database, you can get some idea by running a query like the following:
WITH DB_CPU_Stats AS
(
    -- Aggregate cached-plan CPU time per database
    SELECT DatabaseID,
           DB_Name(DatabaseID) AS [DatabaseName],
           SUM(total_worker_time) AS [CPU_Time_Ms]
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY (SELECT CONVERT(int, value) AS [DatabaseID]
                 FROM sys.dm_exec_plan_attributes(qs.plan_handle)
                 WHERE attribute = N'dbid') AS F_DB
    GROUP BY DatabaseID
)
SELECT ROW_NUMBER() OVER(ORDER BY [CPU_Time_Ms] DESC) AS [row_num],
       DatabaseName,
       [CPU_Time_Ms],
       CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms]) OVER() * 100.0 AS DECIMAL(5, 2)) AS [CPUPercent]
FROM DB_CPU_Stats
ORDER BY row_num
OPTION (RECOMPILE);
That will show you how much CPU time each database is responsible for, relative to the other databases, based on the queries still in the plan cache.
Note that these are percentages of SQL Server's own CPU use, not of the machine's, so even if SQL Server is only pushing the server's CPU to 10%, these numbers will still add up to 100%.
Combining the output of the above query with overall CPU utilization on the machine should give you a decent rough idea of how much each database is responsible for.
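As a sketch of that combination, the query below reads SQL Server's scheduler-monitor ring buffer to get the most recent snapshot of overall SQL Server CPU use. sys.dm_os_ring_buffers is only lightly documented, so treat the XML paths as an assumption borrowed from common diagnostic queries and verify them on your version:

-- Sketch: most recent overall CPU snapshot from the scheduler monitor ring buffer.
-- The ring buffer contents are not formally documented; the XML paths below are
-- an assumption based on widely used diagnostic queries.
SELECT TOP (1)
       record.value('(./Record/@id)[1]', 'int') AS [RecordID],
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS [SQLServerCPUPercent],
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS [SystemIdlePercent]
FROM (SELECT CONVERT(xml, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
ORDER BY record.value('(./Record/@id)[1]', 'int') DESC;

For example, if that snapshot shows SQL Server using 40% of the machine's CPU and the per-database query attributes 60% of cached-plan CPU to one database, that database accounts for roughly 0.40 * 0.60 = 24% of the machine's total CPU.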
Hope this helps!