There isn't any way to reset autoincrement such that it goes back to the beginning and just overwrites existing data. Autoincrement isn't built that way.
What you could do is update rather than insert. You'd have to either pre-populate every record with blank data so that every entry is always an update, or write a procedure that reads the next ID first and then inserts or updates accordingly. Either way, you will need to track the next ID yourself instead of letting the database handle it for you (with autoincrement). When you get to the maximum value, reset the next ID to 1 and keep going.
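A minimal sketch of the second option, using SQLite in memory so it runs anywhere (the table and column names are made up, and `MAX_ID` is shrunk to 10 for demonstration; a MySQL version would use `INSERT ... ON DUPLICATE KEY UPDATE` instead of SQLite's `INSERT OR REPLACE`):

```python
import sqlite3

MAX_ID = 10  # tiny wraparound point for the demo; real tables would use BIGINT's limit

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE counter (next_id INTEGER)")
conn.execute("INSERT INTO counter VALUES (1)")

def write_event(payload):
    """Claim the next ID ourselves, then insert or overwrite that row."""
    (next_id,) = conn.execute("SELECT next_id FROM counter").fetchone()
    # INSERT OR REPLACE overwrites any existing row with the same id,
    # which covers both the first pass and every wrapped-around pass.
    conn.execute("INSERT OR REPLACE INTO events (id, payload) VALUES (?, ?)",
                 (next_id, payload))
    # Wrap back to 1 once we hit the maximum, per the scheme above.
    wrapped = 1 if next_id >= MAX_ID else next_id + 1
    conn.execute("UPDATE counter SET next_id = ?", (wrapped,))
    conn.commit()

for n in range(15):          # 15 writes into a 10-row table
    write_event(f"event-{n}")

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 10 — the table never grows past MAX_ID; old rows get overwritten
```

In a concurrent system the read-then-update of the `counter` row would need to happen inside one transaction (or use `SELECT ... FOR UPDATE` in MySQL) so two writers can't claim the same ID.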
Thinking Practically:
The largest value for UNSIGNED BIGINT is 18,446,744,073,709,551,615. Depending on what is in your record, you are looking at thousands (even millions) of petabytes of data. Are you sure you won't run into space constraints with that much data? You may find that what you need to do is purge data that is old enough to be discarded as you go, just to make room in your database. If you do that, then you can avoid the pre-population and just insert every time.
If you purge periodically, say every night, then you just need to wait until the gap between where your remaining data is and ID=1 is big enough that you don't have to worry about ID collisions. When that happens, reseed your ID and start inserting from ID=1 again. If you do this, then you can go back to using autoincrement.
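The purge-then-reseed cycle can be sketched with SQLite in memory, whose `sqlite_sequence` table plays the role of MySQL's AUTO_INCREMENT counter (names here are invented for the demo). One caveat: engines differ on how low you can set the counter while rows remain — InnoDB, for example, silently bumps a too-low value back up past the current maximum — so this sketch purges everything before reseeding; with surviving rows you would wait for the gap described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# AUTOINCREMENT makes SQLite track the high-water mark in sqlite_sequence.
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY AUTOINCREMENT, msg TEXT)")
for n in range(100):
    conn.execute("INSERT INTO log (msg) VALUES (?)", (f"row-{n}",))

# Nightly purge: discard everything old enough to throw away.
conn.execute("DELETE FROM log")

# Reseed the counter (the MySQL equivalent is ALTER TABLE log AUTO_INCREMENT = 1),
# then let ordinary autoincrement inserts resume from ID=1.
conn.execute("UPDATE sqlite_sequence SET seq = 0 WHERE name = 'log'")
conn.execute("INSERT INTO log (msg) VALUES ('first-after-reseed')")
new_id = conn.execute(
    "SELECT id FROM log WHERE msg = 'first-after-reseed'").fetchone()[0]
print(new_id)  # 1
```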
Edit: Additional Information about reseeding...
OP mentioned in a comment that the command for reseeding an autoincrement column in MySQL is of interest. This command resets the starting autoincrement value:
ALTER TABLE table AUTO_INCREMENT = 1
I have read that this should be done with caution because it can take a long time in a large table.
See also the documentation for AUTO_INCREMENT.
Neither SQL nor the relational model is disturbed by foreign keys that reference a natural key. In fact, referencing natural keys often dramatically improves performance. You'd be surprised how often the information you need is completely contained in a natural key; referencing that key trades a join for a wider table (and consequently reduces the number of rows you can store in one page).
By definition, the information you need is always completely contained in the natural key of every "lookup" table. (The term lookup table is informal. In the relational model, all tables are just tables. A table of US state postal abbreviations might have rows that look like this: {AK, Alaska}, {AL, Alabama}, {AZ, Arizona}, etc. Most people would call that a lookup table.)
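A small sketch of that trade-off, again using SQLite in memory with made-up table names. Because the referencing table carries the two-letter code itself rather than an opaque ID, the common query needs no join at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# The lookup table's natural key is the two-letter code; no surrogate ID needed.
conn.execute("CREATE TABLE states (code TEXT PRIMARY KEY, name TEXT NOT NULL)")
conn.executemany("INSERT INTO states VALUES (?, ?)",
                 [("AK", "Alaska"), ("AL", "Alabama"), ("AZ", "Arizona")])

# The referencing table stores the code itself, not an ID pointing at it.
conn.execute("""CREATE TABLE addresses (
                    street TEXT,
                    state  TEXT REFERENCES states(code))""")
conn.execute("INSERT INTO addresses VALUES ('123 Main St', 'AZ')")

# The information we need is completely contained in the natural key,
# so no join back to the lookup table is required:
(state,) = conn.execute("SELECT state FROM addresses").fetchone()
print(state)  # AZ
```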
On big systems, it's not unusual to find tables that have more than one candidate key. It's also not unusual for tables that serve one part of the enterprise to reference one candidate key, and tables that serve another part of the enterprise to reference a different candidate key. This is one of the strengths of the relational model, and it's a part of the relational model that SQL supports pretty well.
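SQL supports this directly: a foreign key may reference any candidate key declared UNIQUE, not just the primary key. A hypothetical sketch (product, SKU, and UPC names invented for illustration) in which two parts of the enterprise each reference the candidate key they already hold:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Two candidate keys: the warehouse's SKU and the barcode's UPC.
conn.execute("""CREATE TABLE products (
                    sku  TEXT PRIMARY KEY,
                    upc  TEXT NOT NULL UNIQUE,
                    name TEXT NOT NULL)""")
conn.execute("INSERT INTO products VALUES ('W-1001', '012345678905', 'Widget')")

# Warehouse systems reference one candidate key...
conn.execute("CREATE TABLE stock (sku TEXT REFERENCES products(sku), qty INTEGER)")
conn.execute("INSERT INTO stock VALUES ('W-1001', 40)")

# ...while point-of-sale systems reference the other.
conn.execute("CREATE TABLE scans (upc TEXT REFERENCES products(upc))")
conn.execute("INSERT INTO scans VALUES ('012345678905')")

# Each side joins back to products through the key it already holds.
(name,) = conn.execute(
    "SELECT p.name FROM scans s JOIN products p ON p.upc = s.upc").fetchone()
print(name)  # Widget
```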
You'll run into two problems when you reference natural keys in tables that also have a surrogate key.
First, you'll surprise people. Although I usually lobby strongly for the Principle of Least Surprise, this is one situation where I don't mind surprising people. When the problem is that developers are surprised by the logical use of foreign keys, the solution is education, not redesign.
Second, ORMs aren't generally designed around the relational model, and they sometimes embody assumptions that don't reflect best practice. (In fact, they often seem to be designed without ever having input from a database professional.) Requiring an ID number in every table is one of those assumptions. Another one is assuming that the ORM application "owns" the database. (So it's free to create, drop, and rename tables and columns.)
I have worked on a database system that served data to hundreds of application programs written in at least two dozen languages over a period of 30 years. That database belongs to the enterprise, not to an ORM.
A fork that introduces breaking changes should be a show-stopper.
I measured performance with both natural keys and surrogate keys at a company I used to work at. There's a tipping point at which surrogate keys begin to outperform natural keys. (Assuming no additional effort to keep natural key performance high, like partitioning, partial indexes, function-based indexes, extra tablespaces, using solid-state disks, etc.) By my estimates for that company, they'll reach that tipping point in about 2045. In the meantime, they get better performance with natural keys.
Other relevant answers: In Database Schema Confusing
As a general rule, every table ought to have a natural key, because it usually isn't a good idea to duplicate information in a table. Only add a surrogate key if and when you find there is a reason to do so.