I've not tried R but have worked a lot with Python. I think the ease of use and flexibility of data handling structures within Python make it extremely attractive to data folk.
AWS and Google allow Java, Python and Node.js as primary languages for their data technology. Google also supports Go.
AWS does allow custom language runtimes, and I believe Rust is becoming increasingly popular.
I've found that Python is a great "get things done" language: I'm focussed on what I'm actually trying to do rather than fighting the language. I like the support for TDD with PyTest and BDD with the Behave framework. Pylint and Flake8 are great for enforcing code-quality rules, and the Black code formatter helps keep code compliant with them.
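To show what I mean about PyTest, here's a minimal sketch of the TDD style it encourages; the `slugify` function and test names are made up for illustration.

```python
# test_slugify.py -- a hypothetical example of PyTest-style unit tests.
# Run with: pytest test_slugify.py

def slugify(text: str) -> str:
    """Lower-case a string and replace runs of whitespace with hyphens."""
    return "-".join(text.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  a   b ") == "a-b"
```

PyTest discovers `test_*` functions automatically and needs nothing beyond plain `assert`, which is a big part of why it feels so low-friction.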
Python 3.10 is in beta, with 3.11 in alpha. 3.11 has a goal of achieving a dramatic performance boost over preceding versions. Existing optimisation techniques, Cython for example, do help but are limited in scope.
The way I started my career meant I never had the separation between what I would do as a developer and what I would do as a DB person. For me, mixing and matching SQL and other languages to achieve an end is just the way it should be done.
The Python SQLAlchemy library allowed me to interact with varying SQL sources with ease and consistency.
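As a small sketch of that consistency: the same SQLAlchemy Core code runs against SQLite, Postgres, MySQL and so on, with only the connection URL changing. The table and column names here are made up for illustration.

```python
# A minimal sketch of mixing SQL and Python via SQLAlchemy Core,
# using an in-memory SQLite database; swapping the URL for e.g.
# "postgresql://..." leaves the rest of the code unchanged.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    conn.execute(text("CREATE TABLE measurements (sensor TEXT, value REAL)"))
    # Bound parameters keep the SQL safe and portable across backends.
    conn.execute(
        text("INSERT INTO measurements VALUES (:sensor, :value)"),
        [{"sensor": "a", "value": 1.5}, {"sensor": "a", "value": 2.5}],
    )
    avg = conn.execute(
        text("SELECT AVG(value) FROM measurements WHERE sensor = :s"),
        {"s": "a"},
    ).scalar()
```

The `text()` construct lets me write plain SQL when I want to, while the engine handles dialect differences and connection pooling.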
The Pandas library is great for data handling, though horribly memory-inefficient. Apache Spark is a fantastic piece of engineering. Its support for the SQL language is incredible.
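One mitigation for the memory inefficiency that I find worth knowing: low-cardinality string columns stored with the `category` dtype take far less memory than plain object columns. The data below is synthetic, purely for illustration.

```python
# Sketch: comparing memory usage of an object column vs. a
# categorical column in pandas (synthetic data).
import pandas as pd

df = pd.DataFrame({"country": ["UK", "FR", "DE"] * 100_000})

as_object = df["country"].memory_usage(deep=True)
as_category = df["country"].astype("category").memory_usage(deep=True)

# The categorical version stores small integer codes plus one copy of
# each distinct string, so the saving grows with column length.
ratio = as_object / as_category
```

It doesn't fix pandas' copy-heavy internals, but for wide string-laden frames it makes a real difference.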
PyArrow is great for writing out Parquet files and for translating between columnar formats. I wish Parquet were an input/output option for bcp.