I strongly suspect that, in the future, there will be much less imperative programming and much more declarative programming. That is, we will spend less time, as programmers, telling our computers "this is how to do this task" and more time telling them "this is what the task is". Defining the conditions around the task at hand is a powerful mechanism, and it is very well suited to, say, manipulating large sets of data in one go, or to running on massively parallel hardware.
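To make the contrast concrete, here's a minimal sketch in Python (my choice of language and example, not anything from a particular future system). The first version spells out every step of the "how"; the second just states the "what", and a query-engine-like runtime would be free to parallelise it however it likes.

```python
# Imperative: tell the computer exactly HOW to build the result,
# one step at a time, in a fixed order.
evens_imperative = []
for n in range(10):
    if n % 2 == 0:
        evens_imperative.append(n)

# Declarative: state WHAT the result is -- "the even numbers
# below ten" -- and leave the mechanics to the runtime.
evens_declarative = [n for n in range(10) if n % 2 == 0]

assert evens_imperative == evens_declarative == [0, 2, 4, 6, 8]
```

The declarative form has no loop counter and no mutation to get wrong, which is exactly what makes it friendlier to bulk data operations and parallel execution.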
My honours thesis at university was written in a declarative language called Prolog. The program performed calculations on certain kinds of conditions in code and, in time, could have become part of a compiler that would tell you whether your real-time conditions were likely to be met by your code. Complicated stuff, and we expected it to work only when it had solid, concrete numbers to work with. However, because of the way Prolog works, when we fed it symbolic data instead of concrete data, it swallowed the whole thing and still produced results. In computing terms, this is like teaching a child basic arithmetic and finding out later that they've conquered algebra all on their own, based only on your arithmetic lessons.
That's another kind of power declarative programming has. It can sometimes go beyond your expectations in perfectly valid and logical ways. It didn't make any difference to my program if it was manipulating "x" or manipulating "2". They're all symbols at some level. We need that kind of power, natural parallelism and simplicity of expression to conquer tomorrow's programming problems. We might not be able to expand today's most popular languages to handle these problems, though.
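Here's a toy illustration of that "it's all symbols" idea, again in Python rather than Prolog, and emphatically not my thesis program: a tiny simplifier over expression trees whose leaves may be concrete numbers or symbolic names. The same code path handles both; concrete data collapses to an answer, symbolic data flows through intact. (The tuple representation and the `simplify` name are my own inventions for this sketch. Real Prolog gets this behaviour for free via unification and logical variables.)

```python
def simplify(expr):
    """Reduce an expression tree of ('+', a, b) nodes.
    Leaves are either concrete ints or symbolic strings like 'x'."""
    if not isinstance(expr, tuple):
        return expr  # a leaf: number or symbol, treated alike
    op, a, b = expr
    a, b = simplify(a), simplify(b)
    if op == '+' and isinstance(a, int) and isinstance(b, int):
        return a + b          # concrete data collapses to a number
    return (op, a, b)         # symbolic data passes through intact

print(simplify(('+', 2, 3)))              # concrete: 5
print(simplify(('+', 'x', ('+', 1, 2))))  # symbolic: ('+', 'x', 3)
```

Notice that the function never asks up front "is this a number or a variable?" before deciding whether to proceed; it manipulates whatever symbols it's given, which is the behaviour that surprised us.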
Mokalus of Borg
PS - It's likely there will still be imperative parts of these new languages.
PPS - It's difficult to avoid them.