(I'm putting this in an answer, as it's way too long for a comment.)
We have a scenario similar to yours for our bug tracking system. We use it internally, of course, but customers can also submit issues through a page we created on our customer SharePoint site.
What we decided to do was host the database and website only at the office and provide external access from there (which we were already doing for some of our SaaS customers). If the internet totally bombs out (rare), it's more important that we can continue to work than for our customers to be able to submit new issues.
In your scenario, I don't know how critical the data is, how much data there is, or how important it is for external users to be able to write data.
Perhaps you could consider using a database at the alternate location as a read-only secondary, but direct all writes to the primary. While this will probably involve some application changes to separate read-only and read-write connections, this type of solution might be enough to satisfy the requirements for the small amount of time the office internet is down.
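If you're on SQL Server 2012 or later, one built-in way to get this split is an Always On availability group with a readable secondary at the alternate location; read-only connections are then steered with the ApplicationIntent keyword. A rough sketch of what the two connection strings might look like (the server and database names here are made up):

```
Read-write (routed to the primary at the office):
    Server=sql-ag-listener;Database=BugTracker;Integrated Security=SSPI;

Read-only (routed to the readable secondary):
    Server=sql-ag-listener;Database=BugTracker;Integrated Security=SSPI;ApplicationIntent=ReadOnly;
```

The application still needs to know which of its connections are read-only, which is the application change I mentioned, but the routing itself is handled by the availability group listener rather than custom code.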
Regardless, I stick by my recommendation not to mix MySQL and SQL Server if you can avoid it. IMO, you'll be better off long-term by directing resources into the existing migration plan and holding off on developing a more robust replication solution until that stage of the project is complete.
Also, definitely try your best to avoid any master-master replication scenario. These can be highly non-trivial to configure and support at the best of times. The $ and time that will be spent developing and debugging a solution involving master-master heterogeneous replication will be astronomical, and probably won't ever work correctly 100% of the time (actually, probably nowhere close to that). Not that a built-in homogeneous replication solution will be perfect either, but at least in that case, you can call for customer support if something blows up and you don't know how to fix it; if you roll your own solution, you're on your own.
I can't point to documentation for the cause, but I believe the default rows-per-batch for a BULK INSERT operation is "all": the entire file is treated as a single batch. Setting a limit on rows per batch can make the operation more digestible; that's why the option exists. (Here and below, I'm looking at the Transact-SQL BULK INSERT documentation, so this could be way off for SSIS.)
It'll have the effect of splitting the operation into multiple batches of X rows, each committed as a separate transaction. If there's an error, the batches that finished will remain committed in the destination table, and only the batch that was interrupted will roll back. If that's tolerable for what you're doing, i.e. you can re-run it later and catch up, then try that.
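For the T-SQL side, this is the BATCHSIZE option. A minimal sketch (the table name and file path are made up):

```sql
-- Load in batches of 50,000 rows; each batch commits as its own transaction,
-- so a failure only rolls back the batch in flight.
BULK INSERT dbo.StagingTable
FROM 'C:\loads\data.csv'
WITH (
    BATCHSIZE       = 50000,   -- rows per batch/transaction
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n'
);
```

In SSIS the closest knobs, as far as I know, are "Rows per batch" and "Maximum insert commit size" on the OLE DB Destination, but check the docs for the exact semantics there.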
It's not wrong to have a partition function that puts all current inserts into one partition, but I don't see how partitioning is useful at all when every partition lives in the same filegroup. Also, using datetime for the boundary values is poor, and actually kind of broken: since SQL Server 2008, a datetime literal like 'YYYY-MM-DD' without an explicit CONVERT can cheerfully be interpreted as YYYY-DD-MM depending on language/DATEFORMAT settings. Not kidding. Don't panic, though; just change the literal to 'YYYYMMDD' and it's fixed, or use CONVERT(datetime, 'YYYY-MM-DDT00:00:00', 126), I think. But I think partitioning on a proxy for the date value (year as int, or year + quarter) will work better.
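To see the ambiguity for yourself, compare the following. The first result depends on the session's DATEFORMAT/language settings, so it may differ on your server; the other two are unambiguous:

```sql
SET DATEFORMAT dmy;
SELECT CAST('2013-04-05' AS datetime);                 -- may parse as May 4 rather than April 5
SELECT CAST('20130405' AS datetime);                   -- 'YYYYMMDD' is always unambiguous
SELECT CONVERT(datetime, '2013-04-05T00:00:00', 126);  -- style 126 forces ISO 8601 parsing
```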
Maybe it's a design copied from elsewhere, or duplicated across several datamarts. If this *is* a true datamart, i.e. a dump from the data warehouse to give department managers some data to play with, that isn't being sent on anywhere else (by you), and is probably read-only as far as the data users are concerned, then it seems to me you could either remove the partition function or change it to explicitly put all new data into the fourth partition no matter what, and no one would care. (Perhaps you should check that no one cares.)
It feels like a design where the plan is to drop the contents of partition 1 some time in the future and create another new partition for more new data, but it doesn't sound like that's happening here. At least it hasn't happened since 2013.
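If you do keep partitioning and go with the int-year proxy, the setup might look something like this. This is only a sketch; the function, scheme, and boundary values are made up for illustration:

```sql
-- Partition on an int year column instead of a datetime boundary.
CREATE PARTITION FUNCTION pfYear (int)
    AS RANGE RIGHT FOR VALUES (2012, 2013, 2014);

-- All partitions in one filegroup, matching the existing design.
CREATE PARTITION SCHEME psYear
    AS PARTITION pfYear ALL TO ([PRIMARY]);
```

With RANGE RIGHT, any row with year >= 2014 lands in the last partition, which gives you the "all new data goes to the newest partition" behavior without any date-literal parsing issues.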
Best Answer
So close. The ternary (conditional) operator in SSIS expressions looks like

(condition) ? valueIfTrue : valueIfFalse
With the notable exception of the ForEach Enumerator, nowhere else in SSIS expressions are you able to use an assignment like time = "00:00:00". The assignment target is implied by whatever column or variable the expression result is being assigned to. If the data type of the time column is actually detected as DT_TIME, the above expression is likely to break, because the value would be outside the allowable domain for DT_TIME values; but it'd actually break/fail/error at the source component level (OLE DB / ADO.NET / ODBC source).
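As a purely hypothetical example, assuming the goal is to substitute "00:00:00" when the incoming time column is NULL, a Derived Column expression would look like:

```
ISNULL(time) ? "00:00:00" : time
```

If the column really is a time type rather than a string, the literal should be cast to match, e.g. (DT_DBTIME)"00:00:00", or the expression will fail on the type mismatch.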