Dynamic linking is an infinite source of complexity, security leaks, incompatibility, unreliability etc., and yet many perceive it as "good" or even "necessary". Let's have a look at common arguments in favour of dynamic linking.
Dynamic linking saves memory.
Actually yes, but only if you wave your hands enough. Contrary to some rumours and propaganda, good static linking (as in Plan 9) only compiles in the code that is actually needed, nothing more and nothing less. Dynamic linking is a relatively new concept: good ol' UNIX didn't need it until the introduction of X Windows, and it could run on a 16-bit PDP-11/40 with an incredible 256 KB of RAM, max! Given that today's machines usually have much more than 128 MB of RAM (that's 500 times more!), the relevance of this claim becomes weaker and weaker. Also note that UNIX shares text pages, i.e. if two processes run the same program, the code is not duplicated in memory. And any space advantages are completely outweighed by the disadvantages.
When Sun reported on their first implementation of shared libraries, the paper they presented (I think it was at Usenix) concluded that shared libraries made things bigger and slower, that they were a net loss, and in fact that they didn't save much disk space either. The test case was Xlib, the benefit negative. But the customer expects us to do them so we'll ship it. -- Rob Pike
This probably does not apply to GLIBC, but even dynamically linked GLIBC applications are larger than statically linked Plan 9 applications …
Dynamic linking allows fixing bugs in libraries / updating libraries in one place.
- It also allows introducing new bugs whose cause might be hard to find.
- Programs are not self-contained, complicating debugging and deploying.
- Versioned symbols don't allow fixing bugs in one place.
- Most programs don't benefit from library updates.
Dynamic linking is secure.
Haha, good one. Few have provided a viable model of how dynamic linking is supposed to be secure, but many exploits are possible precisely because of dynamic linking; just look at your favourite exploit site.
Dynamic linking is fast.
This is even more ridiculous than the last one.
aiju@toshiba ~/tmp $ gcc test.c && time ./a.out
Hello World
./a.out  0.00s user 0.00s system 0% cpu 0.002 total
aiju@toshiba ~/tmp $ gcc -static test.c && time ./a.out
Hello World
./a.out  0.00s user 0.00s system 0% cpu 0.001 total
A factor of two. And this adds up, especially with shell scripts. (Among other things, fork() is greatly slowed by dynamic linking.)
Dynamic linking is portable.
It stops being funny. Dynamic linking is a main cause of the unportability of software, because of library version problems. Many vendors have started shipping their software with all required libraries bundled, which basically combines the disadvantages of static and of dynamic linking.
Dynamic linking is simple.
Don't be silly. IMHO much of the unnecessary complexity in Linux is basically the result of dynamic linking.
Now my all time favourites:
- Dynamic linking is more compatible with <insert license here>.
- GLIBC features need dynamic linking!