Dynamic languages are widely used for different kinds of applications, including rapid prototyping, Web development and programs that require a high level of runtime adaptiveness. However, the lack of compile-time type information reduces the opportunities for compiler optimization and prevents the detection of type errors at compile time. In order to provide the benefits of both static and dynamic typing, hybrid typing languages support the two approaches in the very same programming language. Nevertheless, dynamically typed code in these languages still shows lower runtime performance and lacks early type error detection.
The main objective of this PhD dissertation is to optimize the runtime performance of dynamically typed code. For this purpose, we have defined three optimizations applicable to both dynamic and hybrid typing languages. The proposed optimizations have been included in an existing compiler to measure their runtime performance benefits.
The first optimization is performed at runtime. It is based on the observation that the dynamic type of a reference rarely changes at runtime. Therefore, if the dynamic type is cached, we can generate specialized code for that precise type. When there is a cache hit, the program performs close to its statically typed version. For this purpose, we have used the Dynamic Language Runtime (DLR) of the .NET Framework to optimize all the existing hybrid typing languages for that platform. The optimizations are provided both as a binary optimization tool and as part of an existing compiler. Performance improvements range from 44.6% to a factor of 11.
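The following Java fragment is merely an illustrative sketch of the caching idea, not the DLR-based implementation described above; the class and method names are hypothetical. It shows a minimal monomorphic inline cache that remembers the last dynamic type observed at a call site, so repeated calls on the same type skip the costly reflective lookup.

    import java.lang.reflect.Method;

    // Minimal monomorphic inline cache (hypothetical, illustration only):
    // it remembers the last dynamic type seen at a call site and the method
    // resolved for it, so calls repeated on the same type take the fast path.
    final class InlineCache {
        private Class<?> cachedType;   // last dynamic type observed at this call site
        private Method cachedTarget;   // method resolved for that type

        Object invoke(Object receiver, String methodName) throws Exception {
            Class<?> type = receiver.getClass();
            if (type != cachedType) {                     // cache miss: slow path
                cachedTarget = type.getMethod(methodName);
                cachedType = type;
            }
            return cachedTarget.invoke(receiver);         // cache hit: fast path
        }
    }

    // Example use: the second call hits the cache because the dynamic type
    // of the receiver has not changed.
    //   InlineCache site = new InlineCache();
    //   site.invoke("hello", "length");   // miss: resolves String.length()
    //   site.invoke("world", "length");   // hit: reuses the cached method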
The second optimization is aimed at improving the performance of dynamic variables that hold different types in the same scope. We have defined a modification of the classical SSA transformations to improve the task of type inference. With the proposed algorithms, we infer one single type for each local variable. This makes the generated code significantly faster, since type casts are avoided. When a reference has a flow-sensitive type, we use union types and nested runtime type inspections. Since we avoid the use of reflection, execution time is significantly lower than that of existing approaches. Average performance improvements range from factors of 6.4 to 21.7. Moreover, the optimized code consumes less memory.
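As a simplified illustration of this idea, and not the dissertation's actual algorithm, the following Java sketch (all names are hypothetical) shows how a dynamically typed variable that first holds an int and later a String can be split into two statically typed versions, in the style of SSA, and how a flow-sensitive value can be handled with nested type inspections instead of reflection.

    // Hypothetical sketch: a dynamic variable "x" holds an int and later a String.
    // Splitting it into two SSA-style versions lets each use a precise static
    // type, so no casts or reflective calls are needed.
    static void ssaSplitExample(boolean flag) {
        // Dynamic original:  x = 3; ... x = "abc"; print(x.length())
        int x1 = 3;                   // first version: int
        int doubled = x1 * 2;         // specialized arithmetic, no cast

        String x2 = "abc";            // second version: String
        int length = x2.length();     // specialized call, no reflection

        // Flow-sensitive case: the type is a union (int | String), handled
        // with nested runtime type inspections rather than reflection.
        Object y = flag ? Integer.valueOf(doubled) : x2;
        int size;
        if (y instanceof Integer) {
            size = (Integer) y;            // int branch
        } else {
            size = ((String) y).length();  // String branch
        }
        System.out.println(size + " " + length);
    }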
The third optimization focuses on improving multiple dispatch for object-oriented languages. One typical way of providing multiple dispatch is to resolve method overloads at runtime: depending on the dynamic types of the arguments, the appropriate method implementation is selected. We propose a multiple dispatch mechanism based on the type information of the arguments gathered by the compiler. With this information, a particular specialization of the method is generated, making the code run significantly faster than with reflection or nested type inspections (a sketch of this idea is given below). This approach has other benefits, such as better maintainability and readability, smaller code size, parameter generalization, early type error detection and lower memory consumption.

This thesis was supervised by Prof. Francisco Ortin.
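Returning to the third optimization, the following Java sketch contrasts the two dispatch strategies on a simplified example (the shape classes and method names are hypothetical, not taken from the dissertation): nested runtime type inspections on the left-hand path, and the direct call to a specialization that a compiler can emit when the argument types are known.

    // Hypothetical shapes used to illustrate dispatch on the types of two arguments.
    interface Shape {}
    final class Circle implements Shape {}
    final class Rectangle implements Shape {}

    final class Intersector {
        // Runtime multiple dispatch: nested type inspections select the
        // implementation from the dynamic types of both arguments.
        static boolean intersectDynamic(Shape a, Shape b) {
            if (a instanceof Circle && b instanceof Circle)
                return intersect((Circle) a, (Circle) b);
            if (a instanceof Circle && b instanceof Rectangle)
                return intersect((Circle) a, (Rectangle) b);
            if (a instanceof Rectangle && b instanceof Circle)
                return intersect((Rectangle) a, (Circle) b);
            return intersect((Rectangle) a, (Rectangle) b);
        }

        // When the compiler already knows the types of the arguments, it can
        // emit a direct call to the right specialization below, avoiding the
        // nested checks above.
        static boolean intersect(Circle a, Circle b)       { return true; }  // stub
        static boolean intersect(Circle a, Rectangle b)    { return true; }  // stub
        static boolean intersect(Rectangle a, Circle b)    { return true; }  // stub
        static boolean intersect(Rectangle a, Rectangle b) { return true; }  // stub
    }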