Rust – the Ultimate Programming Language?

Rust has been reported as “most loved language” by some developer surveys, promising safety not available with other mainstream languages. What does that safety really mean? Let’s look deeper.

The idea of a “safe” language has been a hot topic for some time now, where Rust apparently reigns supreme. But what is that “safety”? The primary safety referred to is memory safety.

Languages such as C and C++ allow the programmer to access any memory location without any checks. For example, you can access an array element beyond the size of the array and the compiler won’t complain – it will happily compile the code, only to crash at runtime, if you’re lucky. If you’re unlucky, some random memory is accessed instead, causing silently incorrect behavior, which is worse. Rust guarantees that such out-of-bounds access either fails to compile, or panics (crashes) at runtime rather than touching memory outside the bounds of the array.
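To make this concrete, here is a minimal sketch (array, index, and helper invented for illustration) of what bounds checking looks like in practice:

```rust
// Checked access: get() returns an Option instead of panicking
// (an illustrative helper, not part of the article's examples)
fn checked_get(a: &[i32], i: usize) -> Option<i32> {
    a.get(i).copied()
}

fn main() {
    let a = [10, 20, 30];
    // a[5] would panic at runtime ("index out of bounds") rather than
    // reading unrelated memory the way C or C++ might.
    match checked_get(&a, 5) {
        Some(v) => println!("value: {}", v),
        None => println!("index out of bounds"),
    }
}
```

The unchecked form `a[i]` still exists and is ergonomic, but it panics on a bad index instead of corrupting memory.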

Is Rust the only language that provides this kind of memory safety? Not really. Other languages, such as Python, C#, and Java, provide similar guarantees. One extra nicety you get with Rust is the absence of a null reference/pointer – there is no null in safe Rust.
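As a brief sketch of how “no null” plays out in practice: the absence of a value is modeled with the Option type, and the compiler forces both cases to be handled (the function and data below are invented for illustration):

```rust
// Hypothetical helper: find the first even number, if any.
// There is no null to return; absence is expressed as None.
fn first_even(v: &[i32]) -> Option<i32> {
    v.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    // The compiler will not let us use the result as a plain i32;
    // both the Some and None cases must be handled explicitly.
    match first_even(&[1, 3, 4]) {
        Some(n) => println!("found {}", n),
        None => println!("no even number"),
    }
}
```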

“Safe” Rust? Rust has two personalities – one being Safe Rust – the “normal” Rust developers typically use, but there is a second, mostly hidden part of Rust – unsafe Rust. Unsafe Rust has all the capabilities (and dangers) of unsafe languages, just like C and C++. With unsafe Rust, you can (attempt to) access any memory location and do things the Rust compiler will not normally allow. Ideally, Rust developers won’t use unsafe Rust, but nothing stops them from doing so. A canonical example is mutating a global variable – it must be done in an unsafe context:

static mut myglobal: i32 = 0;

fn do_work() {
    //myglobal += 1;  // does not compile

    unsafe {
        myglobal += 1;
    }
}

Unsafe is the power behind safe Rust. At the end of the day, hardware (like CPUs) is “unsafe” – it will do anything it’s told, attempt to access any memory location, etc. This is why safe Rust is provided – it wraps (in many cases) unsafe code, which is necessary to get things done. Safe Rust must trust unsafe Rust implicitly – the compiler has no way of verifying the correctness of unsafe code. As long as developers stick to safe Rust, they get a guarantee from the Rust compiler, and from well-debugged unsafe Rust libraries, that their code will not misbehave as far as memory safety is concerned.
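A common shape of this wrapping – sketched here with an invented helper, not the actual standard-library implementation – is a safe function whose runtime check establishes exactly the invariant the unsafe call inside requires:

```rust
// Safe wrapper around unsafe code: the bounds check guarantees the
// precondition that get_unchecked relies on (illustrative sketch).
fn checked_read(slice: &[i32], i: usize) -> Option<i32> {
    if i < slice.len() {
        // SAFETY: i was verified above to be within bounds
        Some(unsafe { *slice.get_unchecked(i) })
    } else {
        None
    }
}

fn main() {
    let data = [1, 2, 3];
    // Callers never write unsafe themselves
    println!("{:?}", checked_read(&data, 1));
    println!("{:?}", checked_read(&data, 9));
}
```

Callers of `checked_read` stay entirely in safe Rust; the unsafe block is contained and its correctness argument is local.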

The Second Rust Safety

You may be wondering why, in the above example, access to a mutable global variable requires an unsafe context. This brings in the second part of Rust safety, which really sets it apart from other memory-safe languages – safety in a concurrent environment.

Typical code is multithreaded these days, at least for the purpose of utilizing today’s multicore systems. Writing correct multithreaded code, however, is far from trivial. Latent bugs may lurk in code for years, only manifesting at what appear to be random times, making them difficult to fix because they are difficult to reproduce. Bugs that cannot be reliably reproduced cannot be truly fixed.

For example, languages such as C# and Java will allow you to access shared data concurrently from multiple threads, where at least one of the threads is writing to the shared data. This is the definition of a data race, something to avoid at all costs. These languages provide facilities to deal with data races, but there is no enforcement to use them correctly (or at all). For example, C# has the lock keyword that can be used to execute a block of code by one thread at a time, avoiding a data race if that block accesses the shared data. What happens if another block of code accesses the same shared data without taking the correct lock (the same lock object), or with no lock at all? You get a data race again, and because of timing, it will occur at unpredictable times, making it difficult to identify, let alone fix.

This is where Rust offers a different approach. Its famous ownership model, which provides memory safety, is also what provides concurrency safety. If you try to access shared data from multiple threads, the compiler will refuse to compile the code. This is exactly what happens in the global variable example. The compiler cannot prove that every piece of code accesses the global variable in a thread-safe way. To do this in safe code, the data to be protected can be wrapped in a Mutex:

use std::sync::Mutex;

static myglobal2: Mutex<i32> = Mutex::new(0);

fn do_work() {
    let mut data = myglobal2.lock().unwrap();
    *data += 1;
}

Notice the Mutex itself is not declared mutable (this is Rust’s interior mutability at work). With the above code, there is no way to access the shared data without going through the mutex. This is really powerful, and is not offered by other memory-safe languages. The code may look a bit weird, “dereferencing” the integer, but this is necessary because the value returned from lock().unwrap() is not an integer reference, but a guard type that overloads the deref operator. For a plain i32, a Mutex is not really needed – AtomicI32 would be more efficient and easier to use – but for more interesting objects the pattern shines, such as the following thread-safe access to a Vec<i32>:

use std::sync::Mutex;

static mydata: Mutex<Vec<i32>> = Mutex::new(Vec::new());

fn do_work() {
    let mut v = mydata.lock().unwrap();
    v.push(42);
}
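To show the guarantee end to end, here is a minimal sketch (the thread count and pushed values are invented) where multiple threads push into the shared Vec; trying to touch the Vec without going through the Mutex simply would not compile:

```rust
use std::sync::Mutex;
use std::thread;

static MYDATA: Mutex<Vec<i32>> = Mutex::new(Vec::new());

fn main() {
    // Spawn a few threads that all mutate the shared Vec
    let handles: Vec<_> = (0..4)
        .map(|i| {
            thread::spawn(move || {
                // The only way in is through the lock
                MYDATA.lock().unwrap().push(i);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    println!("items pushed: {}", MYDATA.lock().unwrap().len()); // 4
}
```

The lock guard is released when it goes out of scope at the end of the closure, so there is no unlock call to forget.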

Is it all Rainbows and Unicorns?

Not quite. Rust is known for its steep learning curve. As any beginning Rust developer knows, it sometimes feels like a fight between the developer and the Rust compiler. With time, developers learn to appreciate the compiler, as it makes sure nasty stuff does not happen at runtime. But because of this strong guarantee, seemingly simple things turn out to be not so simple. For example, consider this simple function that returns the longer of two strings (technically string slices):

fn longest(a: &str, b: &str) -> &str {
    if a.len() > b.len() {
        a
    }
    else {
        b
    }
}

This function fails compilation with an error:

error[E0106]: missing lifetime specifier
  --> src/main.rs:26:33
   |
26 | fn longest(a: &str, b: &str) -> &str {
   |               ----     ----     ^ expected named lifetime parameter
   |
   = help: this function's return type contains a borrowed value, but the signature does not say whether it is borrowed from `a` or `b`
help: consider introducing a named lifetime parameter
   |
26 | fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
   |           ++++     ++          ++          ++

One of the powers of the Rust compiler is its explanations and suggestions, which often help resolve errors, as in this case. Lifetime specifiers introduce complexity when working with references (of any kind), because the compiler must be sure that references cannot outlive the objects they refer to. If the compiler can’t prove it, it will fail compilation. The problem with the above code is that the compiler can’t tell which reference would be returned from a call to longest. Every call could return either a reference to a or to b. But why is that a problem? Here is an example:

let s1 = String::from("Hello");
let s3;

{
    let s2 = String::from("Goodbye");
    s3 = longest(&s1, &s2);
}

println!("Longest: {}", s3);

Here, s3 should reference either s1 or s2, but s2 only lives in the inner scope, which means that s3 potentially could reference a dead string; the Rust compiler cannot take that chance. Lifetime annotations tell the compiler what to expect:

fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() > b.len() {
        a
    }
    else {
        b
    }
}

The syntax looks weird before you get used to it (I won’t get into the details here), but the annotations specify that a, b and the returned reference all have the same lifetime. This makes the compiler happy in that it knows what is allowed to be passed in when calling longest. In particular, it will fail compilation of the call to longest above with the following error:

error[E0597]: `s2` does not live long enough
  --> src/main.rs:18:27
   |
17 |         let s2 = String::from("Goodbye");
   |             -- binding `s2` declared here
18 |         s3 = longest(&s1, &s2);
   |                           ^^^ borrowed value does not live long enough
19 |     }
   |     - `s2` dropped here while still borrowed
20 |
21 |     println!("Longest: {}", s3);
   |                             -- borrow later used here

This means that the returned reference is usable only while both inputs are still alive – here, s2 is dropped while s3, which may borrow from it, is used later, so the code fails to compile.

Fortunately, lifetime annotations are not needed in the majority of cases (there are a few lifetime elision rules that help), and in many cases references are not returned or stored. Still, this is another new concept Rust developers need to learn which has no equivalent in other languages.
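For instance, under the elision rules, a function with a single reference parameter needs no annotation – the compiler assumes the output borrows from that input. The function below is an invented example of this case:

```rust
// Elision at work: equivalent to
//   fn first_word<'a>(s: &'a str) -> &'a str
// so no explicit lifetime annotation is required.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let text = String::from("hello world");
    println!("{}", first_word(&text)); // prints "hello"
}
```

It is only when the compiler cannot infer which input the output borrows from – as with the two parameters of longest – that annotations become mandatory.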

So, is Rust the Ultimate Language?

It’s not enough for a programming language to be elegant or performant to succeed. There are other factors, some even more important. Just look at Python – its performance is poor, yet it is extremely popular. Here are some factors I consider necessary for a language to thrive:

  • The language itself
  • Library availability
  • Ecosystem and community
  • Tooling

Libraries are critical – reinventing the wheel is not an option if software is to be delivered reliably and quickly enough. For Rust, the library ecosystem is large and growing rapidly. Just look at https://crates.io.

Tooling is another major one. Rust has its own package manager, cargo, which is extremely flexible and powerful, making it easy to consume external packages and manage builds and configuration. The other part of “tooling” is the IDE. This is where I think Rust is still lacking. The most popular IDE for Rust development is Visual Studio Code, with an extension called rust-analyzer. The extension is powerful and always improving, but VS Code itself is very generic and not geared towards Rust in particular.

Personally, as a mostly-Visual-Studio user, I feel the difference. When working with C# or C++ in Visual Studio, the integration is tight and powerful, and the debugger is more flexible than what the more general VS Code can offer. Visual Studio “supports” Rust as well, with the same rust-analyzer extension used in VS Code, but it’s not the same experience – it feels like a “foreign” extension. Unfortunately, Microsoft doesn’t seem to have a plan for first-class Rust support in Visual Studio, even though they claim to use Rust extensively, including within the Windows OS.

One new tool worth mentioning is JetBrains’ RustRover, which is dedicated to Rust. Personally, I find some JetBrains tools more complex to use than I would like, but that’s probably a matter of getting used to them.

Conclusion

Rust is going places, that’s for sure. It has lots of power and flexibility, and can be used for almost anything, from low-level work like UEFI applications, to web applications, other server applications, and the cloud. There are some parts of Rust syntax and philosophy that I don’t completely agree with, but any language has compromises.

There is no “perfect programming language”, of course, although I dreamed of creating one in my youth 🙂

Published by

Pavel Yosifovich

Developer, trainer, author and speaker. Loves all things software
