What could go wrong? If MyISAM changed the format of anything stored in its files between versions, it will break on 5.1. That would render your stats database on the master corrupt and unusable.
Even if the initial load went fine, it doesn't mean something won't come along later with the result listed above...a corrupt database. It's a risk, and if it can be avoided, I personally would do so.
And in your case, it can be avoided by dumping the data with mysqldump and then loading it into the 5.1 instance.
Running `TRUNCATE PARTITION` works fine:
DROP DATABASE IF EXISTS partest;
CREATE DATABASE partest;
USE partest
CREATE TABLE rolando (
id int not null auto_increment,
LogDate DATE,
PRIMARY KEY (id,LogDate)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
PARTITION BY RANGE (DAYOFYEAR(`LogDate`))
(PARTITION p1 VALUES LESS THAN (8) ENGINE = InnoDB,
PARTITION p2 VALUES LESS THAN (15) ENGINE = InnoDB,
PARTITION p3 VALUES LESS THAN (22) ENGINE = InnoDB,
PARTITION p4 VALUES LESS THAN (29) ENGINE = InnoDB,
PARTITION p5 VALUES LESS THAN (36) ENGINE = InnoDB,
PARTITION p6 VALUES LESS THAN (43) ENGINE = InnoDB,
PARTITION p7 VALUES LESS THAN (50) ENGINE = InnoDB,
PARTITION p8 VALUES LESS THAN (57) ENGINE = InnoDB,
PARTITION p9 VALUES LESS THAN (64) ENGINE = InnoDB,
PARTITION p51 VALUES LESS THAN (358) ENGINE = InnoDB,
PARTITION p52 VALUES LESS THAN MAXVALUE ENGINE = InnoDB);
ALTER TABLE rolando TRUNCATE PARTITION p4;
Here is the output:
mysql> DROP DATABASE IF EXISTS partest;
Query OK, 1 row affected (1.55 sec)
mysql> CREATE DATABASE partest;
Query OK, 1 row affected (0.00 sec)
mysql> USE partest
Database changed
mysql> CREATE TABLE rolando (
-> id int not null auto_increment,
-> LogDate DATE,
-> PRIMARY KEY (id,LogDate)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8
-> PARTITION BY RANGE (DAYOFYEAR(`LogDate`))
-> (PARTITION p1 VALUES LESS THAN (8) ENGINE = InnoDB,
-> PARTITION p2 VALUES LESS THAN (15) ENGINE = InnoDB,
-> PARTITION p3 VALUES LESS THAN (22) ENGINE = InnoDB,
-> PARTITION p4 VALUES LESS THAN (29) ENGINE = InnoDB,
-> PARTITION p5 VALUES LESS THAN (36) ENGINE = InnoDB,
-> PARTITION p6 VALUES LESS THAN (43) ENGINE = InnoDB,
-> PARTITION p7 VALUES LESS THAN (50) ENGINE = InnoDB,
-> PARTITION p8 VALUES LESS THAN (57) ENGINE = InnoDB,
-> PARTITION p9 VALUES LESS THAN (64) ENGINE = InnoDB,
-> PARTITION p51 VALUES LESS THAN (358) ENGINE = InnoDB,
-> PARTITION p52 VALUES LESS THAN MAXVALUE ENGINE = InnoDB);
Query OK, 0 rows affected (2.60 sec)
mysql> ALTER TABLE rolando TRUNCATE PARTITION p4;
Query OK, 0 rows affected (0.25 sec)
mysql>
OK, fine. It works. However, doing this is logically unsafe. Why? The partitioned table is based on `DAYOFYEAR(LogDate)`. Since each partition contains the same week for every year, running `ALTER TABLE rolando TRUNCATE PARTITION p4;` would wipe out week 4 from multiple years, not just one.
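To make the hazard concrete, here is a sketch against the `rolando` table above (the dates are illustrative, not from the original question):

```sql
-- Both dates have DAYOFYEAR in the range 22..28, so both rows land
-- in partition p4 even though they are a year apart.
INSERT INTO rolando (LogDate) VALUES ('2011-01-25');
INSERT INTO rolando (LogDate) VALUES ('2012-01-25');

-- Truncating p4 removes week 4 of 2011 AND week 4 of 2012.
ALTER TABLE rolando TRUNCATE PARTITION p4;
```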
OK, let's try dropping and adding the partition
mysql> ALTER TABLE rolando DROP PARTITION p4;
Query OK, 0 rows affected (0.41 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> ALTER TABLE rolando ADD PARTITION
-> (PARTITION p4 VALUES LESS THAN (29) ENGINE = InnoDB);
ERROR 1481 (HY000): MAXVALUE can only be used in last partition definition
mysql>
This makes sense, since `MAXVALUE` already exists in the definition. `MAXVALUE` must be the last partition, which prevents you from adding (appending) a partition after it.
While you could play games with `ALTER TABLE ... REORGANIZE PARTITION`, the complexity isn't worth it, and you still won't achieve the result of wedging in a partition. Rather than explaining further partition machinations, just do the following:
- Make sure an index exists on the date field
- Perform the `DELETE` SQL already mentioned
An alternative would be to create a table partitioned by week and then drop the exact week.
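A sketch of that alternative, assuming a `YEARWEEK`-based scheme (table and partition names are illustrative); `YEARWEEK()` is on MySQL's list of permitted partitioning functions, and each partition now holds exactly one week of one year:

```sql
CREATE TABLE rolando_weekly (
    id int NOT NULL AUTO_INCREMENT,
    LogDate DATE,
    PRIMARY KEY (id, LogDate)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
PARTITION BY RANGE (YEARWEEK(LogDate))
(PARTITION p201101 VALUES LESS THAN (201102),
 PARTITION p201102 VALUES LESS THAN (201103),
 PARTITION pmax    VALUES LESS THAN MAXVALUE);

-- Dropping a partition now removes exactly one week of one year:
ALTER TABLE rolando_weekly DROP PARTITION p201101;
```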
Best Answer
That error is not about your `RETURN`; it's this:

ERROR 1329 (02000): No data - zero rows fetched, selected, or processed

When the cursor runs out of data, this error is thrown. That could be because the `SELECT` query declared in the cursor didn't find any rows, or because it did find rows but you iterated past the last one. If nothing happened when the cursor ran out of data, how would you ever break out of `read_loop`?

You need to trap the error with a handler, which will prevent execution of the procedure from terminating, and set a variable you can use to break out of the loop.
At the top, with the other variables:
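The original snippet was lost here; the conventional pattern (the flag name `done` is the usual convention, not taken from your code) is a simple flag variable:

```sql
DECLARE done INT DEFAULT 0;  -- becomes 1 when the cursor is exhausted
```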
Then, after declaring but before opening the cursor:
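Again the snippet is missing; assuming the standard pattern (MySQL requires handler declarations to come after cursor declarations), it would be:

```sql
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
```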
Inside your loop:
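The missing snippet is presumably the usual exit check, placed immediately after your `FETCH` (loop label `read_loop` is from your code; the flag name `done` is assumed):

```sql
IF done THEN
    LEAVE read_loop;
END IF;
```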
This should solve your immediate problem.
Other observations:
Your function modifies the database, so at the top, you should have this:
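The snippet here is also missing; it is presumably the routine characteristic `MODIFIES SQL DATA`, which in `CREATE FUNCTION` syntax sits between the return type and the body (function name and signature below are placeholders, not from your code):

```sql
CREATE FUNCTION process_login(p_login_id INT)  -- placeholder name/signature
RETURNS INT
MODIFIES SQL DATA   -- declares that the routine writes to the database
BEGIN
    RETURN 0;       -- body elided
END
```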
...although, really, this would be more appropriately written as a stored procedure rather than a function. MySQL supports non-deterministic stored functions that modify the database, but there are potential issues with the binary log (used for point-in-time recovery and replication) that can pop up.
To process all of the data rather than one value at a time, you could use nested cursors: the outer cursor fetches whatever data you're feeding into this function, and the inner cursor, inside an additional `BEGIN`/`END` block, does the logic you've got above with the values fetched from the outer cursor. This requires resetting the `done` variable back to 0 after each iteration of the inner block so that the outer loop doesn't stop prematurely.

The inner subquery is unnecessary, since the cursor could be declared with...
...but, in fact, this entire function seems like it may be unnecessary.
In SQL, it's typically best if you tell the database what needs to be done ("declarative"), as opposed to how to do it ("procedural").
Thinking too procedurally is an SQL antipattern.
I don't know your column names in the 'login' table, so you'll need to correct those, but consider the following query. As far as I can tell, it does exactly what the function does, for all logins and all matching records.
Done.
For future maintenance of the "login_apps" table, a trigger on the "records" table could look up the related login and write the new entries into "login_apps" every time "records" has an insert or an update... or when the stop time is logged on the "login" table, an update trigger could read "records" and insert into "login_apps".
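A heavily hedged sketch of the first trigger idea; every table and column name below is an assumption about your schema, not taken from the question:

```sql
DELIMITER $$

-- Hypothetical: keeps login_apps in sync whenever records gets a new row.
CREATE TRIGGER records_after_insert
AFTER INSERT ON records
FOR EACH ROW
BEGIN
    INSERT INTO login_apps (login_id, program_name)
    SELECT l.id, NEW.program_name       -- column names are placeholders
      FROM login l
     WHERE l.id = NEW.login_id;         -- look up the related login
END$$

DELIMITER ;
```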
Final thought: it would probably be better design if you stored the program ID rather than the program name in the login_apps table... unless the "records" table actually has an auto_increment primary key, in which case that would be the value you should really store in login_apps.