Occasionally at a previous job, we ran into a problem with our databases: a client DBA would modify the schema, a stored procedure or a function without telling us, and then some change we made would overwrite theirs. The way we usually worked around that was by carefully (manually) comparing the production schema to our development version, figuring out which of our changes conflicted with theirs, (again, manually) merging them, then testing, then deploying.
This is time-consuming, error-prone and very annoying. Distributed version control offers a better option.
Here's how it would work. We would keep a copy of the scripts for our schema (exported by SQL Server Management Studio) in one branch of a Git repository, and keep another branch for the production system. Before deployment, to determine whether the DBA has made any changes that need to be incorporated, we do a fresh export on the production side, check it into the production branch, then attempt a merge with the development branch. Any conflicts are automatically detected and highlighted by Git, so they're very easy to see. Once the conflicts are dealt with, we generate our deployment scripts and update the production database.
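To make that concrete, here's a minimal sketch of the two-branch workflow in plain shell. The branch names, file name (schema.sql) and the CREATE TABLE statements are all hypothetical stand-ins; in practice the .sql files would come from the SSMS export rather than printf.

```shell
#!/bin/sh
# Sketch: one branch per environment, with Git doing the conflict detection.
set -e

mkdir -p schema-repo && cd schema-repo
git init -q
git config user.email "dev@example.com"   # placeholder identity for commits
git config user.name  "Dev Team"

# Initial export of the production schema, checked into its own branch.
git checkout -q -b production
printf 'CREATE TABLE Customers (Id INT);\n' > schema.sql
git add schema.sql
git commit -q -m "Initial production schema export"

# Development work happens on a separate branch.
git checkout -q -b development
printf 'CREATE TABLE Customers (Id INT, Name VARCHAR(100));\n' > schema.sql
git commit -q -am "Dev: add Name column"

# Meanwhile the client DBA changes production; before deployment,
# we re-export the production schema and commit it to its branch.
git checkout -q production
printf 'CREATE TABLE Customers (Id INT, Region CHAR(2));\n' > schema.sql
git commit -q -am "Fresh production export (DBA changed something)"

# Now try to merge development into production. Instead of one side
# silently overwriting the other, Git flags the conflicting files.
if ! git merge --no-edit development; then
    echo "Conflicts to resolve before deployment:"
    git diff --name-only --diff-filter=U
fi
```

Running this leaves schema.sql in a conflicted state with both versions marked up inside it, which is exactly the point: the conflict surfaces at merge time instead of after deployment.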
I suppose this doesn't really have to be done with distributed version control, as such. As long as there are separate, comparable repositories for the two environments that allow changes and conflicts to be detected and merged automatically, you have a better solution than manual comparisons. This isn't really a novel use of version control, either; it's exactly what version control was designed to do. The only twist is that the client DBA never has to know the version control system exists, yet everyone still benefits from it.
Mokalus of Borg
PS - I am currently using this source control technique to help with my church's website.
PPS - It's a much smaller scale problem, but the solution holds up nicely.