I'd compare the data using, say, Red Gate SQL Data Compare in addition to DBCC.
Comparing all data in all tables row by row means checking every populated page.
However, for larger databases this will take a long time, so instead you can run aggregates over columns (say AVG or CHECKSUM_AGG). I use this to quickly compare results, but note that it still forces all the data to be read, so it won't be instant on large tables either.
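As a sketch of the aggregate approach: run the same query on both servers and diff the output. The table and column names (`dbo.Orders`, `OrderTotal`) are placeholders for your own schema, and bear in mind CHECKSUM is hash-based, so it can in rare cases miss a difference.

```sql
-- Run on both servers and compare the three values.
-- dbo.Orders / OrderTotal are hypothetical names; substitute your own.
SELECT COUNT_BIG(*)              AS RowCnt,        -- cheap first check
       AVG(OrderTotal)           AS AvgTotal,      -- catches value drift
       CHECKSUM_AGG(CHECKSUM(*)) AS TableChecksum  -- order-independent row hash
FROM dbo.Orders;
```

If the counts match but the checksums differ, you can narrow it down by grouping the same aggregates on a key range.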
Currently I am running my business on 2x SQL servers which have 38 GB of RAM in total
As well as the replication strategy to improve availability (the replica taking over if the primary fails), you must factor in a good backup strategy. In what follows I'm going to assume that you already have this; if not, you need to factor it into your costings, and a really good backup solution for your business-critical data is not something you should try to buy or create cheaply.
If you are happy with this arrangement in terms of resilience and general performance, and you are upgrading the servers mainly to replace old hardware, then go with option 1 and upgrade the current design. Remember that if you build a different infrastructure you may run into problems you've never seen before, and if you do, they may turn up at the least convenient time for your business. If you don't have the knowledge/experience to fix things fast (and it isn't available from anyone else in the company you can call on), this could be very expensive in terms of business lost to downtime and paying an expert to fly in and help.
Do I need SSDs besides plenty of RAM?
That very much depends on your application's write patterns. Given you have ~80GB of data, you might find your whole common working set can be held in RAM most of the time, so for read-heavy loads you will see little or no difference between the SSDs and spinning metal in day-to-day operations, as the drives won't be touched for most reads. That data held in memory will be lost in some circumstances (such as when SQL Server is shut down for whatever reason), and it will take a while for the working set to get loaded again; at that point the SSDs will help quite noticeably. Writes, of course, still need to hit the disk, so for write-heavy loads the SSDs will help.
Do remember that Standard Edition will only use up to 64GB of memory per instance (see http://msdn.microsoft.com/en-us/library/cc645993.aspx#CrossBoxScale), so you may not see all of that 96GB get used in either box.
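To see how much of your data is actually being served from RAM, you can query the buffer pool; and if you run more than one instance per box, you can cap each instance's memory explicitly. The 65536 MB value below is just an illustrative cap, not a recommendation.

```sql
-- How much of the buffer pool each database occupies (pages are 8KB):
SELECT DB_NAME(database_id) AS DatabaseName,
       COUNT(*) * 8 / 1024  AS BufferMB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY BufferMB DESC;

-- Cap an instance's memory (value in MB) so instances sharing a box
-- don't fight over RAM; 65536 here is an example figure:
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 65536; RECONFIGURE;
```

If your database's BufferMB approaches its data size, your reads are largely memory-resident and faster disks will mainly help at startup and for writes.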
I now have 1 main database (80GB of data and 20GB of SQL Server log)
I can have TWO SQL servers with 8 x 250 GB SSD SATA discs each,
or HP storage with 10-14 x 600GB 10K SAS discs.
Am I reading this wrong, or are you expecting your data size to increase a fair bit over the active service life of the new hardware? In RAID 10 those SSDs will give you ~1TB of space, which is considerably bigger than the ~100GB you are using now, and that large collection of spinning metal even more so (3TB+ unless you configure a redundancy level above 2:1), so you might be over-spending here for your near-/mid-term needs.
Best Answer
Because you are working with virtual environments, you can just do a storage migration. This will probably cause some performance problems, so I would do the following:
I am aware that it takes longer to migrate storage when the VM is running. It may be quicker to shut down the secondary, move the storage, start it back up again, fail over, and then do the other server.
There is no need to remove servers from the Availability Group. AAGs are designed to cope with a node being shut down for a period of time.
If you shut down or change to async, you'll need to make sure there is enough storage space to keep the transaction log until the server is running at full speed again (or comes back online, if you shut it down).
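Switching a replica to async for the duration of the migration can be sketched like this; the group name `[MyAG]` and replica server name `SQL2` are placeholders for your own, and both statements run on the primary.

```sql
-- Assumption: AG is named [MyAG], the secondary replica is on SQL2.
-- Before migrating the secondary's storage, stop waiting on its commits:
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQL2'
    WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

-- ...migrate the storage, let the secondary catch up, then switch back:
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQL2'
    WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
```

While the replica is async (or down), the primary's transaction log can't be truncated past what the secondary has hardened, which is exactly why the log-space warning above matters.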