Percona Server is a fork of MySQL that came into being about 7 years ago. Percona's employees are mostly former MySQL people. They care about InnoDB performance so much that they have injected some 30,000 lines of code into their version of MySQL to make those enhancements possible. They have been doing this since MySQL 5.0 and are still at it today.
MySQL (er, Oracle [still hate saying it]) has been trying to keep up with Percona via its InnoDB plugin (available since MySQL 5.1.38, and now fully native in MySQL 5.5).
Percona once did an aggressive comparison of MySQL versions. In most cases, LEFT UNCONFIGURED, older versions of MySQL outperform newer versions. You must tune MySQL to get the most out of it. Switching to Percona may not get you the enhancements you are looking for. Even with Percona, you must find out which performance enhancements come out of the box and which settings you need to configure yourself.
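A quick way to see whether you're running at defaults is to inspect a few of the InnoDB settings that most commonly need tuning. A minimal sketch (the variable names are standard MySQL; the values you want depend entirely on your hardware and workload):

```sql
-- Check whether performance-critical InnoDB settings are still at defaults.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';       -- default is tiny; often set to ~70% of RAM
SHOW VARIABLES LIKE 'innodb_log_file_size';          -- affects write throughput and recovery time
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'; -- durability vs. speed trade-off
```

If the buffer pool is still at its out-of-the-box size, no version upgrade or fork switch will save you.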
You asked if you could take a look and see what could be done, and got a yes for an answer.
That's very encouraging: you're in an excellent position to advance your career and learn something!
Some of the things you mentioned are problems, some not. I'll answer them briefly and follow with some broader advice.
procedure returns almost 2000 rows showing a lot of duplicate indexes. If they're really duplicates -- same type, key order, etc. -- the extras can be dropped without loss and without fear. The DBMS can use only one, and it doesn't care which. Insert performance improves whenever an index goes bye-bye.
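A sketch of how you might find exact duplicates in SQL Server by comparing key-column lists from the catalog views (this compares key columns and sort order only; check included columns too before dropping anything):

```sql
-- Build a comparable signature of each index's key columns, then self-join.
WITH index_keys AS (
    SELECT i.object_id, i.index_id, i.name,
           key_cols = STUFF((
               SELECT ',' + CAST(ic.column_id AS varchar(10))
                          + CASE WHEN ic.is_descending_key = 1 THEN 'D' ELSE 'A' END
               FROM sys.index_columns AS ic
               WHERE ic.object_id = i.object_id
                 AND ic.index_id  = i.index_id
                 AND ic.is_included_column = 0
               ORDER BY ic.key_ordinal
               FOR XML PATH('')), 1, 1, '')
    FROM sys.indexes AS i
    WHERE i.index_id > 0   -- skip heaps
)
SELECT OBJECT_NAME(a.object_id) AS table_name,
       a.name AS index1, b.name AS index2
FROM index_keys AS a
JOIN index_keys AS b
  ON a.object_id = b.object_id
 AND a.index_id  < b.index_id
 AND a.key_cols  = b.key_cols;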
Nonclustered (NC) indexes including all the columns from a table. Normal. These are known as "covering indexes". They're similar to materialized views.
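For illustration, here is what a covering index looks like in T-SQL; the `Orders` table and columns are hypothetical:

```sql
-- A query such as
--   SELECT OrderDate, Total FROM dbo.Orders WHERE CustomerID = 42;
-- can be answered entirely from this index, with no lookup into the base table.
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Covering
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, Total);
```

The `INCLUDE` columns ride along at the leaf level without widening the index key, which is what makes these indexes "covering" rather than merely wide.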
lots of heap tables. Bad. Every table needs a key. Sometimes you'll be told, "it has no key". If true, the whole row can be defined as the primary key, except that an extra column is added (not in the key) as a count. "Adding" a row becomes a matter of adding 1 to the count.
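The count-column technique can be sketched like this; the `EventLog` table and its columns are a hypothetical example, not from the poster's database:

```sql
-- A table that seemingly "has no key": make the whole row the primary key
-- and add a count column (not part of the key) instead of storing duplicates.
CREATE TABLE dbo.EventLog (
    event_type   varchar(50) NOT NULL,
    event_source varchar(50) NOT NULL,
    occurrences  int NOT NULL DEFAULT 1,   -- the extra column, outside the key
    CONSTRAINT PK_EventLog PRIMARY KEY (event_type, event_source)
);

-- "Adding" a duplicate row becomes an increment:
UPDATE dbo.EventLog
SET occurrences += 1
WHERE event_type = 'error' AND event_source = 'web';
```

The table is no longer a heap, and the row count shrinks to the number of distinct rows.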
really wide indexes might or might not help, cf. covering indexes, above.
databases are over 100 GB. Normal; databases grow with data.
Statistics are updated every day for every table. Good.
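If you ever need to refresh statistics by hand, the standard T-SQL forms are below (`dbo.Orders` is a placeholder table name):

```sql
-- Rebuild statistics for one table from a full scan of its rows.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or refresh database-wide; this only touches statistics that need updating.
EXEC sp_updatestats;
```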
There are reports that take more than an hour to finish. Every problem is an opportunity.
My advice is to do your homework by reading four books, and to start small with something dramatic, by fixing the longest-running procedure (or the one your boss thinks is the biggest bugaboo).
When you suggest a change, you want to be able to speak confidently and authoritatively in the domain you're addressing yourself to: relational theory and database design, and SQL Server.
For theory, I recommend CJ Date's An Introduction to Database Systems, SQL and Relational Theory, and Database Design and Relational Theory, in that order. That will give you the information you need, and intellectual reference points when you're asked about why something should be done a certain way. For SQL Server, I can recommend Inside Microsoft SQL Server 2008: T-SQL Querying by Itzik Ben-Gan.
Reading the books is unfortunately the easy part; the hard part is applying them to the database. You may find it difficult to get a list of all the queries used with a table. Not only will that make it hard to know which indexes are needed, it means splitting one table into two will require arduous testing to make sure nothing breaks (and application changes, too, potentially). The guy who wants to normalize the database has few friends, because any change, even one that's correct and necessary, may create work and disruption for others.
I'd look at your bad boy, the long-running report or the one that seems to be wrong a lot of the time. Examine the query plan and think about the tables in terms of BCNF. If you're lucky, the answer might be adding an index, or possibly removing some. If not, you've got a design problem on your hands, something that's bound to involve others.
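When you examine the bad boy, it helps to measure before and after. A minimal sketch of capturing per-table I/O and timing in SQL Server:

```sql
-- Turn on per-query measurements, then run the slow report's query.
SET STATISTICS IO ON;    -- logical reads per table appear in the Messages tab
SET STATISTICS TIME ON;  -- CPU and elapsed time

-- ... run the report's query here, then compare logical reads
-- before and after your index or design change.

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

Logical reads are a good currency for comparing plans: a fix that drops reads from millions to hundreds will show up long before wall-clock time does.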
Good luck. You have your work cut out for you. The good news is you've got a place to learn some things, and permission to work smarter instead of harder. If you keep at it, one of these days you'll reduce query time not by 30% or 50%, but 99% or more. When that happens, you'll be looking at the new DBA in the mirror, so be careful what you wish for. :-)
Best Answer
You should enable slow-query-log and log-queries-not-using-indexes in order to surface the queries that are skipping the indexes.
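These can be switched on at runtime (put the equivalent lines in my.cnf to make them survive a restart); the one-second threshold here is just an example value:

```sql
-- Enable the slow query log without restarting the server.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;               -- log queries slower than 1 second
SET GLOBAL log_queries_not_using_indexes = 1; -- log index-skipping queries too
```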
This is just a personal preference, but I usually use the --processlist option of mk-query-digest. I wrote an automation script for it long ago: MySQL general query log performance effects, based on Percona's video about mk-query-digest being a replacement for the slow log.
Once the queries are known, you will want to run EXPLAIN on all the SELECTs. Any query whose EXPLAIN plan fails to choose an index is the one doing the table scans.
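For example (the `orders` table and predicate are hypothetical):

```sql
-- In the EXPLAIN output, type = ALL and key = NULL mean a full table scan:
-- either no usable index exists, or the optimizer declined the ones that do.
EXPLAIN SELECT * FROM orders WHERE customer_name = 'Smith';
```

Those are the queries to fix first, either by adding the missing index or by rewriting the predicate so an existing index becomes usable.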