There are a number of options here, and please don't limit yourself to my answer. In particular, you may find array-native databases helpful. My answer is specifically about your questions on SQL-based databases.
It sounds to me like this is a question of geospatial information. SQL-based databases are in fact used quite successfully in such fields, but this is also a specialist area within databases.
Among the SQL databases in this area, PostgreSQL with the PostGIS add-on is considered one of the best, and if I were you this is where I would start. The primary advantage of SQL is that it preserves flexibility down the road to re-use your data for purposes you haven't thought of yet. Doing this with good geospatial support means you can calculate distances across a large area without worrying about the specifics of spherical trigonometry.
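As a rough sketch of what that looks like (the table and column names here are hypothetical, invented for illustration), PostGIS's `geography` type lets you store longitude/latitude points and get spheroid-aware distances for free:

```sql
-- Hypothetical table of survey sites; names are illustrative only.
CREATE TABLE site (
    id       serial PRIMARY KEY,
    name     text,
    location geography(Point, 4326)  -- lon/lat on the WGS84 spheroid
);

-- Distance in meters between each pair of sites; PostGIS's
-- ST_Distance handles the spherical trig for the geography type.
SELECT a.name, b.name,
       ST_Distance(a.location, b.location) AS meters
FROM site a
JOIN site b ON a.id < b.id;
```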
Of course this only becomes a factor with very large grids. For smaller grids, where the curvature of the earth can be disregarded, PostgreSQL also offers a range of built-in geometric types, including points on a coordinate system. I mention this because it isn't clear how large an area is being surveyed, and therefore whether plane geometry can be assumed.
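For the plane-geometry case, the built-in types need no extension at all. A minimal sketch (again with made-up table names and a made-up reference point):

```sql
-- Built-in geometric types, no PostGIS required; plane geometry only.
CREATE TABLE sample (
    id    serial PRIMARY KEY,
    pos   point,     -- (x, y) in whatever local units you survey in
    value numeric
);

-- Euclidean distance from a reference point, using the built-in
-- <-> distance operator; finds the ten nearest samples.
SELECT id, pos <-> point '(100, 250)' AS dist
FROM sample
ORDER BY dist
LIMIT 10;
```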
Even so, PostGIS may still simplify things by supporting representations of, and calculations on, 3- and 4-dimensional coordinate systems.
Also note that you say your sites are not necessarily square. One thing you can do in PostgreSQL (with either the geometric types or PostGIS) is define a non-rectangular boundary for each site, so you can check that a point lies inside the site's bounds before saving the measurement.
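With the built-in types that check is a one-liner. A sketch, using hypothetical names and coordinates:

```sql
-- Hypothetical site table with a non-rectangular boundary,
-- stored in PostgreSQL's built-in polygon type.
CREATE TABLE site (
    id     serial PRIMARY KEY,
    bounds polygon
);

-- Which sites contain a given measurement point?
-- The @> operator tests "polygon contains point".
SELECT s.id
FROM site s
WHERE s.bounds @> point '(12.5, 7.3)';
```

Your application could run a query like this before an insert, or you could enforce the rule inside the database itself with a trigger.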
Declarative Language Impacts
I think this concern is overblown. People can and do write SQL queries as if they were part of the imperative language of the program calling them. For most of your queries it won't matter.
What people mean by a declarative language is that within a query, the structure tells the database what information you want, not how to get it. This is important when you want complex information from the database because basically it means that if you can ask the right question (and your data is valid) you will get the right answer.
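To make that concrete (table and column names are hypothetical), the following query states only *what* is wanted; the database's planner decides join order, index usage, and so on:

```sql
-- "Mean measurement per site since 2015" -- the query describes the
-- result, not the algorithm used to compute it.
SELECT s.name,
       avg(m.value) AS mean_value
FROM site s
JOIN measurement m ON m.site_id = s.id
WHERE m.taken_at >= date '2015-01-01'
GROUP BY s.name;
```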
The big difference in practice, however, is that long SQL queries are easier to debug than long imperative subroutines, simply because you can more quickly narrow down where in the query the malfunction occurs.
How this would work
Chances are that if you go this route you'd have a database and a program written in a language of your choice. The program would send queries to the database and get the answers back. You can also (in PostgreSQL and many other relational databases) put your queries inside functions, which can then be called by the application, giving more of an imperative or functional interface. Data would be stored on disk and accessed by a separate piece of software from your program. You could also connect with another program (from MS Access to pgAdmin) and run queries or generate reports.
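Wrapping a query in a function looks something like this (the function name, table, and parameter are invented for illustration):

```sql
-- Hide the query behind a function; the application then has a
-- simple call-style interface instead of inline SQL.
CREATE FUNCTION mean_value_for_site(p_site_id integer)
RETURNS numeric
LANGUAGE sql STABLE AS $$
    SELECT avg(value)
    FROM measurement
    WHERE site_id = p_site_id;
$$;

-- The application (or any connected tool) just calls it:
SELECT mean_value_for_site(42);
```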
In essence you can think of the RDBMS as a "math engine" which manages your data, and your program interacts with it to do what you need.
When using a database to sort the data, one needs to know the structure of the data; in other words, there must be a field delimiter and a row delimiter. Then you can import the file into a SQL table, index it, and sort it the way you want. Importing flat files is easily done with tools provided by any database engine, for example SQL Server Integration Services (SSIS) or MySQL's LOAD DATA.
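In PostgreSQL the equivalent tool is the `COPY` command. A sketch, assuming a comma-delimited file with a header row (the file path and column layout are hypothetical):

```sql
-- Target table matching the flat file's columns (illustrative names).
CREATE TABLE raw_reading (
    site_id integer,
    x       numeric,
    y       numeric,
    value   numeric
);

-- Bulk-load the delimited file in one statement. The path must be
-- readable by the database server; \copy in psql reads client-side.
COPY raw_reading FROM '/path/to/readings.csv'
WITH (FORMAT csv, HEADER true, DELIMITER ',');

-- Index it so the sorts and lookups you want are fast.
CREATE INDEX ON raw_reading (site_id);
```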