I don't know of a single good comprehensive solution to this.
Local development means developers working on their own code don't break other people's work against a shared database. However, when you pull the latest code, you also need to get your database into the right state to match the code changes you've merged in. If two people are making changes at the same time, merging can be difficult, since database upgrade scripts can be incompatible. Column order doesn't normally matter in a database, but differences between databases can still be a bit annoying.
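One common mitigation is to make each upgrade script idempotent and ordered, so that scripts merged from two branches can be re-run safely in either order. A minimal T-SQL sketch (the table, column, and script names here are hypothetical):

```sql
-- Hypothetical versioned upgrade script: 0042_add_customer_email.sql
-- Guarded so it can be re-run safely after a merge.
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID('dbo.Customer')
                 AND name = 'Email')
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
END;

-- Record that this script has run, so an upgrade runner can skip it next time.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion
               WHERE ScriptName = '0042_add_customer_email')
    INSERT INTO dbo.SchemaVersion (ScriptName, AppliedAt)
    VALUES ('0042_add_customer_email', SYSUTCDATETIME());
```

The guard clauses are what make merged scripts tolerable: re-running someone else's already-applied change becomes a no-op instead of an error.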
There are good tools to compare schemas and data and apply changes. I would say schemas should aim to be identical. However, you typically want lookup-type data updated between developers but not regular application data (new customer types, but not new customers). Configuration data you might want updated, but sometimes only a subset (new printer options, but not file-path settings).
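For the lookup-type data, an upsert keyed on a stable natural key lets each developer pick up new rows without touching transactional data. A sketch using T-SQL's MERGE, against a hypothetical CustomerType table:

```sql
-- Sync lookup rows (e.g. new customer types) without touching customer data.
MERGE dbo.CustomerType AS target
USING (VALUES
    ('RETAIL',    'Retail customer'),
    ('WHOLESALE', 'Wholesale customer')
) AS source (Code, Description)
ON target.Code = source.Code
WHEN MATCHED THEN
    UPDATE SET Description = source.Description
WHEN NOT MATCHED THEN
    INSERT (Code, Description)
    VALUES (source.Code, source.Description);
```

Keying on a natural code rather than a surrogate identity is what makes this safe to run on every developer's database regardless of their local identity values.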
You would think that, ideally, you could simply rebuild your local database completely once you've each merged changes. But if you've set up a bunch of test scenarios through the application (instead of in the build scripts), you now don't have a script to get those changes back into the database. And this gets more difficult as the scope of the database schema increases, with surrogate keys and parent-child relationships that can all have complex dependencies.
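Scripting test scenarios instead of clicking them in means capturing those parent-child dependencies explicitly, for example by resolving each generated surrogate key before inserting the children. A sketch with hypothetical tables:

```sql
-- Seed a test scenario in dependency order, resolving surrogate keys as we go.
DECLARE @CustomerId int;

INSERT INTO dbo.Customer (Name, CustomerTypeCode)
VALUES ('Test Customer A', 'RETAIL');
SET @CustomerId = SCOPE_IDENTITY();

-- Child rows reference the parent's generated key, so the script
-- survives a full rebuild where identity values start over.
INSERT INTO dbo.[Order] (CustomerId, OrderDate)
VALUES (@CustomerId, SYSUTCDATETIME());
```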
In the ideal scenario for a central database, you would have a developer DBA manage the database interfaces for the app and control them, so that the exposed interfaces evolve consistently over time and all developers above that level see the same interface at the same time. But then you have two separate groups coordinating their different feature timelines.
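In that model the "interface" is typically a layer of views and stored procedures the app is granted access to, while the base tables stay private to the DBA. A minimal sketch (the schema, view, and role names are made up):

```sql
-- The app sees only the interface schema; base tables remain private.
CREATE SCHEMA api;
GO
CREATE VIEW api.ActiveCustomers AS
    SELECT CustomerId, Name
    FROM dbo.Customer
    WHERE IsActive = 1;
GO
-- Grant the application role access to the interface only.
GRANT SELECT ON SCHEMA::api TO AppRole;
DENY  SELECT ON SCHEMA::dbo TO AppRole;
```

The DBA can then restructure dbo.Customer freely, as long as api.ActiveCustomers keeps its shape.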
Which, I think, goes a long way toward showing why people are still attracted to a variety of other approaches that put more emphasis on code and less on the database. Ultimately, though, I think that just shifts the problem around.
Yes, the most secure means of authentication is Windows Authentication. Read more about this topic and the differences here.
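The difference shows up when you create the login: a Windows login delegates credential checking to Windows/Active Directory, while a SQL login stores a password inside the server. A sketch (the domain, account, and password are placeholders):

```sql
-- Windows Authentication: SQL Server trusts the Windows/AD token;
-- no password is stored in the database engine.
CREATE LOGIN [CONTOSO\AppServiceAccount] FROM WINDOWS;

-- SQL Server Authentication: the password is managed by SQL Server itself.
CREATE LOGIN AppSqlLogin WITH PASSWORD = 'ChangeMe!ExampleOnly1';
```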
As for different applications having different logins, that is going to be the most secure method. The reason behind this is because each application will most likely require a different set of permissions (I'm assuming a bit here), and instead of having one server principal with a handful of permissions to suit numerous applications, it will be more secure to have each application have a login with permissions tailored to its own needs. It's all about the Principle of Least Privilege.
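Concretely, each application gets its own login and database user, granted only what that application needs. A sketch with hypothetical names, assuming a read-only reporting app:

```sql
-- One login per application, with permissions tailored to that app's needs.
CREATE LOGIN ReportingAppLogin WITH PASSWORD = 'ExampleOnly!ChangeMe1';
CREATE USER ReportingAppUser FOR LOGIN ReportingAppLogin;

-- The reporting app only ever reads, so it gets SELECT and nothing else.
GRANT SELECT ON dbo.SalesSummary TO ReportingAppUser;

-- An order-entry app would get a different login, with INSERT/UPDATE
-- on its own tables instead of this grant.
```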
While there is no limit on the number of simultaneous connections by a single login, other than the usual @@MAX_CONNECTIONS value, using one login for multiple users or developers is generally frowned upon because doing so makes it difficult or impossible to limit permissions on a per-person basis. (You may still be able to identify people based on something like a workstation name, but this would require much hackery on the DBA's part, while simply using separate logins addresses the issue.) Such "well-known" logins also have a way of getting hard-coded into things, along with the well-known password, and then developers and users become very resistant to ever changing that password. With so many ways to get onto a corporate network these days (a rogue LogMeIn running somewhere, for example), being able to turn database access off via an AD group is beneficial.
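Turning access on and off via an AD group works because SQL Server lets you create a login for a Windows group directly; membership is then managed entirely in Active Directory. A sketch (the group name is a placeholder):

```sql
-- One login for the whole AD group: add or remove people in AD,
-- with no SQL Server changes needed to grant or revoke access.
CREATE LOGIN [CONTOSO\DatabaseDevelopers] FROM WINDOWS;
CREATE USER DevGroupUser FOR LOGIN [CONTOSO\DatabaseDevelopers];
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO DevGroupUser;
```

Each person still connects as themselves, so auditing shows individual identities even though permissions are granted once, to the group.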
Using a single login for a web application, which can have many simultaneous users, is more common. It might not be feasible to give out SQL Server logins or AD logins to individuals for large sites (Facebook, for example). In that case there is also a benefit: IIS can use connection pooling.
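Pooling works precisely because every connection from the app uses the same connection string, so closed connections can be handed back out instead of re-established. A typical ADO.NET-style connection string (all values are placeholders):

```
Server=db01;Database=AppDb;User Id=WebAppLogin;Password=<secret>;Pooling=true;Max Pool Size=100
```

If each end user had their own login, each would need a distinct pool, and most of the pooling benefit would disappear.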
It has been many years since I worked in an environment where using a shared login like that was standard operating procedure, although there were some smaller/older projects where a shared login was used (and sometimes abused).