seanmonstar

Jan 16 2020

warp v0.2

Warp is a Rust web server framework focusing on composability and strongly-typed APIs.

Today sees the release of v0.2!

Async and Await

The most exciting part of this release is the upgrade to std::future, so you can now use async/await for cleaner flow control. Due to how warp encourages composition of filters, this is most noticeable at the “ends” of a filter chain, where an application is doing its “business logic”, converting input into actions and replies. And that’s where most of the app code is!

Services

In the original release of warp, I wrote:

We’d like for warp to be able to make use of all the great tower middleware that already exists.

As part of this release, that is now possible! Any Filter which returns a Reply can now be converted into a Service using warp::service(filter). This means you can wrap your filters with any of the growing middlewares, as described in the hyper v0.13 announcement.

Thanks

This was a lot of work by over 60 new contributors, including the massive std::future refactor by new collaborator @jxs.

Be sure to check the changelog for all the goodies!

Dec 30 2019

reqwest v0.10

reqwest is a higher-level HTTP client for Rust. Let me introduce the v0.10 release, which adds async/await support!

Some headline features are:

  • Add std::future::Future support (hello async/await).
  • Add experimental WASM support.
  • Change the default client API to async, moving the previous synchronous API to reqwest::blocking.
  • Make many “extra” features optional to reduce unnecessary dependencies (blocking, cookies, gzip, json, etc).
  • Enable automatic “system” proxy detection.

Here’s a simple streaming example using the new syntax:

use tokio::io::{stdout, AsyncWriteExt as _};

async fn example() -> Result<(), Box<dyn std::error::Error>> {
    let mut resp = reqwest::get("https://hyper.rs").await?;

    // `chunk()` yields `Some(bytes)` until the body is exhausted.
    while let Some(chunk) = resp.chunk().await? {
        stdout().write_all(&chunk).await?;
    }

    Ok(())
}

I want to thank all those contributing to make the best Rust HTTP client even better!

Take a look at the changelog for all the details.

Dec 10 2019

hyper v0.13

After a few months of alpha development, the final release of hyper v0.13.0 is now ready! hyper is a maturing HTTP library written in Rust, already one of the fastest out there1, and trusted by many for its correctness.

The highlights of this release:

  • Full async/await support.
  • Tokio v0.2 upgrade.
  • Adopting tower::Service.

async / await

The premise of async and await in Rust is to allow writing code that uses Futures in a similar style to “blocking” code. No more combinators, no more “callbacks”, just slap .await on the end of the expression. For instance, here’s how we can use the Client:

use hyper::{body::HttpBody as _, Client};
use tokio::io::{stdout, AsyncWriteExt as _};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let client = Client::new();

    let mut resp = client.get("http://httpbin.org/ip".parse()?).await?;
    println!("Status: {}", resp.status());
    println!("Headers: {:#?}\n", resp.headers());

    // Stream the body, writing each chunk to stdout as it arrives.
    while let Some(chunk) = resp.body_mut().data().await {
        stdout().write_all(&chunk?).await?;
    }

    Ok(())
}

Connecting, writing the request, receiving the response, streaming the body, and writing to stdout can all be done without “blocking” the thread. Instead, with the use of await, just that future will make as much progress as it can without blocking, and then go to a pending state and register a notification when more progress could be made. And yet, it looks just like regular “blocking” code. This should hugely improve the ergonomics of writing server code that scales under load.

Tokio v0.2

Tokio is a phenomenal async IO runtime for Rust, and hyper has built-in support by default. The Tokio v0.2 upgrade includes async/await support, a significant scheduler improvement, and even faster compile times.

Tower Services

Tower is an RPC design that builds off Twitter’s “your server as a function”2. It defines the base Service trait, which handles some request-like value, and asynchronously returns a response-like value. The tower-service crate is minimal, and protocol-agnostic. Our hope is that others in the ecosystem can just use Service and http, and not have to depend directly on hyper3.

An additional benefit of integrating with Tower is being able to make use of many of the middleware we’ve already developed.

  • Server middleware:

    let svc = ServiceBuilder::new()
      // Reject the request early if concurrency limit is hit
      .load_shed()
      // Only allow 1,000 requests in flight at a time
      .concurrency_limit(1_000)
      // Cancel requests that hang too long
      .timeout(Duration::from_secs(30))
      // All wrapped around your application logic
      .service(your_app_service);
    
  • Or wrapping a Client:

    let svc = ServiceBuilder::new()
      // Retry requests depending on the responses or errors
      .retry(your_retry_policy)
      // Cancel when the server takes too long
      .timeout(Duration::from_secs(5))
      // Load balance using P2C
      .layer(p2c::peak_ewma(dns_resolver))
      // Wrapping a hyper::Client
      .service(hyper_client);
    

Additionally, most async fn(In) -> Out things in hyper are now just a Service. This means you can easily add these middleware to custom resolvers or connectors, for instance. Uses include adding a timeout or whitelist to a resolver.

v0.13.0

This is probably the most exciting release of hyper yet. It’s all thanks to the tireless efforts of 30+ contributors this release that we’ve gotten this far. Our production users continue to help us improve hyper’s correctness, performance, and features. The current goal is that we can finish up the remaining design questions and release hyper 1.0 in the middle of 2020.

To see even more details, check out the v0.13.0 changelog!


  1. Always take benchmarks with a carton of salt. ↩︎

  2. “Your Server as a Function” (PDF) ↩︎

  3. This is similar to Python’s WSGI, Ruby’s Rack, and Java’s Servlet. ↩︎

Dec 2 2019

http v0.2

A couple years ago, we released the beginning of the http crate. Its purpose was to allow a common API for the ecosystem to interact with HTTP types, without those types referring to a specific implementation. We’ve seen great things sprout up since then!

Today marks the 0.2 release, a chance to make some minor breaking changes, with the hopes that this 0.2 version can soon just be promoted to 1.0. So, what has changed?

HTTP/3

A seemingly simple change is adding the Version::HTTP_3 constant. However, we couldn’t add it in 0.1 due to an unexpected compiler behavior that allowed exhaustive matching on the Version constants even though the internal enum wasn’t exposed. This time, we’ve made sure to prevent exhaustive matches, so we can add new versions in the future.

Builders are now by-value

There are some pretty useful builders to construct a Request, Response, or Uri. In 0.1, they were “by-reference” builders, meaning that each builder method took &mut self and returned &mut Builder. Now, they take self and return Builder. There are pros and cons for each pattern, but the weightiest one that made us change was the nature of “consuming” the builder once finished. To “build” a “by-ref” builder requires that either the data inside be cloned, or the builder be left in a “don’t build me again” state. This change makes it clearer that a builder cannot be used again, as reuse is now a compile error.

Reduced public dependencies

To help meet the goal of promoting to http v1.0, we’ve reduced the number of public dependencies to 0. There’s still a way to make use of bytes to reduce copies, but it’s now exposed in a way that there’s no API contract. This allows http to reach 1.0 even if bytes takes longer.

Next

We expect the ecosystem to start updating to http 0.2 so you all can have the improvements as soon as possible. For example, hyper should hopefully also be ready this week. Check the changelog for the full details!

Oct 8 2019

reqwest alpha.await

reqwest is a higher-level HTTP client for Rust. I’m delighted to announce the first alpha release that brings async/await support!

Some headline features are:

  • Add std::future::Future support (hello async/await).
  • Add experimental WASM support.
  • Change the default client API to async, moving the previous synchronous API to reqwest::blocking.
  • Make many “extra” features optional to reduce unnecessary dependencies (blocking, cookies, gzip, json).

Hey look, a cute example using the new async/await syntax with reqwest:

dbg!(reqwest::get("https://hyper.rs").await?.text().await?);

These alpha versions depend on Rust 1.39, which (as of this post) isn’t stable yet. Some other things may change in reqwest before the full release (can other features be made optional?), but the alphas allow others to experiment now.

My sincere thanks to all who helped contribute to reqwest! Enjoy <3

Sep 4 2019

hyper alpha supports async/await

I’m excited to announce the first alpha of hyper 0.13. hyper is a maturing HTTP library written in Rust, already one of the fastest out there, and trusted by many for its correctness.

This alpha release brings support for the new std::future::Future. The reason this is so exciting is that this allows using the new async/await syntax that will be stabilizing in Rust 1.39.

Example

The following example shows how one can use async/await to dump a response to the console:

use std::io::{stdout, Write};

#[tokio::main]
async fn main() -> Result<(), Error> {
    let client = Client::new();

    let mut resp = client.get("http://httpbin.org/ip".parse()?).await?;
    println!("Status: {}", resp.status());
    println!("Headers: {:#?}\n", resp.headers());

    // The response must be `mut` so the body can be streamed out.
    while let Some(chunk) = resp.body_mut().next().await {
        stdout().write_all(&chunk?)?;
    }

    Ok(())
}

The same async/await style can be used for writing servers as well!

Changes to come

Besides the change from futures 0.1 to std::future::Future, and the shift to writing code with async/await, much of hyper’s API will feel very similar. Still, there are some technically breaking changes that will be included in 0.13 as well.

Embracing tower::Service

During hyper 0.12, servers were defined via the hyper::service::Service trait. Since then, we’ve been working hard on a general Service interface, and building powerful middleware that can utilize it. Our hope is that eventually, applications can be generic over Service and the http types, and a user could choose their backend that plugs right in (such as hyper).

Consider a small example that handles many mundane things for you:

let svc = ServiceBuilder::new()
    // Reject the request early if concurrency limit is hit
    .load_shed()
    // Only allow 1,000 requests in flight at a time
    .concurrency_limit(1_000)
    // Cancel requests that hang too long
    .timeout(Duration::from_secs(30))
    // All wrapped around your application logic
    .service(your_app_service);

The tower::Service trait easily allows everyone to power up their servers and clients!

Alpha One

This first alpha is to allow people to try out writing HTTP servers and clients using the new async/await syntax. All the features from 0.12 work in this release. However, not all the API changes have been finalized, so as with other alphas, there will likely be breakage between alpha releases as we fine tune things.

But for now, get your fresh copy of hyper v0.13.0-alpha.1 here!

[dependencies]
hyper = "=0.13.0-alpha.1"

Dec 18 2018

warp v0.1.10

warp is a web server framework for the Rust language.

Today sees the 11th release of warp, v0.1.10! I wanted to show off the new features, and highlight some of the amazing work that has appeared since the initial announcement.

v0.1.10

  • TLS Support: there is now optional support for TLS, enabled via the tls feature.
  • CORS: There is a “wrapping” filter (warp’s idea of middleware) that can provide CORS to any existing Filter.
  • Retrieving the remote address.
  • Websocket test helpers: testing filters has always been easy thanks to warp::test, and now, warp::test::ws allows for easy testing of Websocket routes specifically.

Previously

In case you missed it, some highlights of work that has landed before v0.1.10:

  • Rejection system clarity: warp initially had a rejection system that would try to automatically translate rejections into HTTP responses. It wasn’t that scalable. The rejection system now is simply errors for why a request failed, and Filter::recover can be used to translate those into specific HTTP responses.
  • warp::fs filters automatically support Conditional and Range requests, and try to use the OS blocksize for improved performance all around.
  • Streaming request and response bodies.
  • Support for custom transports besides the default TCP.
  • And many smaller improvements and new filters.

Next Focus: Service

When I announced warp initially, I had mentioned the Service trait, and the tower-web framework. There are still plans to see warp and tower-web merge, and current efforts have been around solidifying the Service trait itself.

As a recap, the Service trait is essentially some extra pieces on top of an async fn(Request) -> Response. Our aim is that Service and the http crate are the most basic interface that the ecosystem can gather behind. Server implementations and frameworks could be compatible with each other, as long as they both just knew about Service and http.1

Being the common interface, it then becomes easier for frameworks and users to add in “tower middleware”, since a wrapped Service is still a Service. There are already several tower middlewares that have been developed in support of Linkerd2.

We recently published a new release of tower-service. A prototype now exists for warp to be able to convert a Filter directly into an HTTP Service. From there, we could simply run it directly via hyper::Server. Other HTTP server implementations that supported Service could theoretically also just run it, and the user would still just deal with warp types.

The future of webdev in Rust looks bright!


  1. This is similar to other languages, like WSGI, Rack, Servlet, WAI, and the like. ↩︎

Aug 1 2018

warp

Over the past several months, I’ve been working on a web framework in Rust. I wanted to make use of the new hyper 0.12 changes, so the framework is just as fast, is asynchronous, and benefits from all the improvements found powering Linkerd. More importantly, I wanted there to be a reason for making a new framework; it couldn’t just be yet another framework whose only difference is that I wrote it. Instead, the way this framework is used is quite different from many that exist. In doing so, it expresses a strong opinion, which might not match your previous experiences, but I believe it manages to do something really special.

I’m super excited to reveal warp, a joint project with @carllerche.

Background

What makes warp different?

I’ve been working on web servers for years. Before coming to Rust, I did several things in PHP, moved over to Python, and then shifted again to Nodejs. I’ve tried many frameworks. I found that I often need to configure predicates, like certain headers being required or query parameters being needed, and sometimes I need to configure that a set of routes should be “mounted” at a different path, possibly with certain predicates there too. I noticed that the concept of mounting, or sub-routes, or sub-resources, or whatever the framework calls them, didn’t feel… natural, at least to me. It frequently felt like a secondary concept, occasionally not having all the power that a standard route does.

I’ve also been working in Rust for several years now, and what kept me using the language was its powerful type system1. The more I wrote Rust, and learned how amazing “fearless refactoring” is, the more I hated working in dynamic languages (in my case, a large Nodejs server), as trying to refactor pieces inevitably would remind us (in production) that our supposedly comprehensive test suite still had holes in it. I wanted app-specific types to save me from shipping bugs.

A few months ago, I found the Finch library in Scala, and shortly after, Akka, both of which instead just treat everything as a sort of function converting from input to output, and from there, you just chain together these different pieces, and they compose and reuse really well. Scala also has a powerful type system, and those frameworks embrace converting information from HTTP messages into app-specific types. I fell in love.

The thing that makes warp special is its Filter system.

Filters

A Filter in warp is essentially a function that can operate on some input, either something from a request or something from a previous Filter, and returns some output, which could be some app-specific type you wish to pass around, or some reply to send back as an HTTP response. That might sound simple, but the exciting part is the combinators that exist on the Filter trait. These allow composing smaller Filters into larger ones, allowing you to modularize and reuse any part of your web server.

Let me show you what I mean. Suppose you need to piece together data from several different places of a request before you have your domain object. Maybe an ID is a path segment, some verification is in a header, and other data is in the body.

let id = warp::path::param();
let verify = warp::header("my-app-header");
let body = warp::body::json();

Each of these is a single Filter. We can combine them together with and, and then map the combined result to get a really natural feeling handler:

let route = id
    .and(verify)
    .and(body)
    .map(|id: u64, ver: MyVerification, body: MyAppThingy| {
        // ...
    });

The above route is a new Filter. It has combined the results of the others, and provided their results naturally to the function supplied to map. Additionally, the types are enforced, because, well, this is Rust! If you were to change one of the filters such that it returned a different type, the compiler would let you know that you need to adjust for that change.

This combining of results is smart: it is able to automatically toss results that are nothing (well, unit, so ()), instead of passing worthless unit arguments to your handlers. So if you needed to combine a new Filter into this route that only checks some request values to determine if the request is valid, and otherwise returns nothing, your handler doesn’t need to change.

Besides dropping units, did you notice how even though multiple results were combined together, the map closure received each as individual arguments? This greatly improves development, since that means that id.and(verify).and(body) is actually exactly the same as id.and(verify.and(body)), but using just tuples would have changed around the signature of the results. The routing documentation shows more ways this is useful.

This concept powers everything in warp. Once you know you can match a single path segment via warp::path("foo"), then the idea of mounting doesn’t need to be something special. You just have your filter chain for a set of endpoints, and simply “and” it with a new path filter. If your “mount” location needs to also gate on headers, or something else, you can just and those Filters as well.

Built-in functionality

As awesome as the Filter system is, if warp didn’t provide common web server features, it’d still be annoying to work with. Thus, warp provides a bunch of built-in Filters, allowing you to compose the functionality you need to describe each route or resource or sub-whatever.

  • Path routing and parameter extraction
  • Header requirements and extraction
  • Query string deserialization
  • JSON and Form bodies
  • Static Files and Directories
  • Websockets
  • Access logging
  • And others, and more being added.

The docs explain how to use each, and the examples go more in-depth on how to combine them to make actual web servers.

tower-web

A few months ago, there was mention of a web framework, tower-web, that’d be coming soon. The concept behind it is to provide a web framework built around tower’s Service trait. That is still coming. warp is being released right now for a couple reasons:

  1. The Filter system is really awesome, as touched on above.
  2. To explore some ideas before solidifying tower and tower-web. We’d like for warp to be able to make use of all the great tower middleware that already exists.

Expect to hear more about it, and how it fits with warp, soon!

warp

This is warp v0.1. It’s awesome. It’s fast. It’s safe. It’s correct. There’s documentation, and examples, and an issue tracker to file bugs and track progress of new Filters that are coming (CORS is almost ready). I want to thank those of you who tried warp out privately and sent feedback in; it was super valuable!


  1. I realize other languages also have nice type systems, but I didn’t usually want to pay the cost associated with those languages. Rust just gives me what I want. ↩︎

Jun 26 2018

Better HTTP Upgrades with hyper

It’s been possible to handle HTTP Upgrades (like Websockets) in hyper if you made use of the low-level APIs in the server and client, but it wasn’t especially nice to work with. It also meant that, to handle upgrades, you couldn’t use the nicer things that hyper takes care of for you with Client or Server.

In hyper v0.12.31, handling upgrades is much easier!

Body::on_upgrade()

The mechanism for handling upgrades and CONNECT is unified into a Future on the hyper::Body type. The way this works is that in either case, Client or Server, you’re already receiving a hyper::Body that represents the streamed body from the remote. It also happens to be a great place to store a flag of whether an HTTP Upgrade is possible.

For now, after using a Body to get any data, you can convert it into a Future that yields the “upgraded” connection on success. With lessons learned from the lower-level upgrade process, the returned Upgraded opts for ease of use by default. It implements Read and Write, and those implementations will check the read buffer for any bytes read before the upgrade completed. The easiest thing to do is to treat the yielded Upgraded type as some impl Read + Write, and use it as such for the next protocol you plan to use.

In order to provide this API, the Upgraded holds the IO type as a boxed trait object internally. If dynamic dispatch is undesirable, there is Upgraded::downcast to try to convert into the original type, along with the remaining read buffer.

Take a look at these simplified examples upgrading to Websockets:

Client Upgrades

let client = Client::new();

let req = Request::builder()
    .uri("http://example.local/chat")
    .header("upgrade", "websocket")
    .header("connection", "upgrade")
    .body(Body::empty())
    .unwrap();

// This builds a future that should be spawned on an executor...
client
    .request(req)
    .and_then(|res| {
        res.into_body().on_upgrade()
    })
    .and_then(|upgraded| {
        // just use this as an IO
        websocket_lib::client_thing(upgraded)
    })

Server Upgrades

let service = service_fn_ok(|req| {
    // Just assuming it's always an upgrade for this example...

    let upgrade = req
        .into_body()
        .on_upgrade()
        .map(|upgraded| {
            websocket_lib::server_thing(upgraded);
        })
        .map_err(|err| eprintln!("upgrade error: {}", err));

    hyper::rt::spawn(upgrade);

    Response::builder()
        .status(101)
        .header("upgrade", "websocket")
        .header("connection", "upgrade")
        .body(Body::empty())
        .unwrap()
});

There’s a fuller example of a client and server that upgrade in the same program as well.


  1. Most support was made available in v0.12.2, but v0.12.3 fixed a couple missing pieces when trying to do CONNECT requests over the Client. Everything else worked in v0.12.2. ↩︎

Jun 1 2018

hyper v0.12

Today sees the release of hyper v0.12.0, a fast and correct HTTP library for the Rust language.

This release adds support for several new features, while taking the opportunity to fix some annoyances, and improve the extreme speeds! Look, some wild bullet points appeared:

  • Faster!
  • More correct.
  • Embraces the http crate types.
  • Adds HTTP2 support to both the client and server.
  • The Client and Server are easier to setup and use.
  • Better runtime support.
  • Better body streams.

Faster

hyper 0.11 is already one of the fastest HTTP libraries out there. However, the original server API based around ServerProto prevented it from going full speed. While a new API with a faster dispatcher has existed for a little while now, 0.12 is able to remove that slower way completely, and just default everyone to hyperspeed. But it doesn’t stop there.

Switching the header types has meant a noticeable boost in serializing HTTP headers, skipping std::fmt overhead, and removing the need for replacing newlines in header values, thanks to HeaderValue preventing those bytes entirely. While replacing the serialization pathway, checks for semantically important headers can be done during serialization itself, reducing hashmap lookups.

By taking control of the trait used to represent bodies, implementations of Payload can provide hints such as if the body is empty, or exactly how big it thinks it is. This allows some more optimizations, and some API niceties, explained later on in this article.

HTTP Correctness

Being fast is important, but being correct is critical.

There are quite a few edge cases in the HTTP/1 protocol, and an implementation you trust should make sure to handle all those cases. Browsing various other HTTP/1 implementations, you’ll find that some of the faster ones ignore them, leaving it up to users to protect themselves manually. Some of the edge cases can mean security vulnerabilities (like not sanitizing newlines out of headers, resulting in message splitting); others just mean clients cannot understand you.

hyper’s correct handling of HTTP/1 continues to improve, all while getting faster. For instance, consider some of these things an HTTP implementation should handle:

  • Receiving a Request with Content-Length: 100 and Transfer-Encoding: chunked means it must be chunked encoded.
  • The reverse is also true: if you insert both a Content-Length header and Transfer-Encoding: chunked header, perhaps in different parts of your code, the following body must be chunked.
  • When responding to a CONNECT request, 200 OK responses cannot have bodies. However, responses with other status codes can!
  • If you send a Content-Length: 100 header, but then try to send a body of 150 bytes, the recipient may try to parse bytes 101-150 as a new message.

There’s plenty more, spelled out in RFC7230, and hyper tries to repair any that it finds, or provides an error if there are no repairs that can be done, instead of silently ignoring and having incorrect state compared to the peer. Thanks to hyper’s usage in the Conduit proxy, we continue to find and fix more and more!

HTTP2

There is now built-in support for both the client and server to make use of HTTP2.

  • The Server by default will handle both HTTP/1 and HTTP2 connections, and can be configured to only accept HTTP2 if desired.
  • The Client requires explicit configuration for now, enabling a requirement for “prior knowledge” usage of HTTP2. Work is happening to allow for ALPN support to get “automatic” HTTP/1 + HTTP2 usage.

Streaming

As mentioned above, hyper changed the way it describes its streaming bodies. Before, it was just a futures::Stream, but that meant the only thing hyper knew about it was that it could produce some data. Now, hyper defines a Payload trait, which besides streaming data, can also stream trailers, declare its length, and say when it is finished.

By owning the Payload trait, hyper can grow its capabilities when needed, such as to add HTTP2 push promises, or other new features.

The Data type of payloads changed from having an AsRef<[u8]> bound to Buf instead. This importantly allows custom application implementations to return data chunks that may come from different contiguous memory sources, taking advantage of hyper’s automatic writev support, meaning fewer copies!

Ergonomics

Using both clients and servers has gotten much easier. There is default support for the new Tokio runtime, removing a lot of boilerplate from getting an event loop and reactor up and running.

Look how simple it is to get a naive HTTP proxy working:

let addr = ([0, 0, 0, 0], 3000).into();

let client = Client::new();
let new_svc = move || {
    let client = client.clone();
    service_fn(move |req|{
        client.request(req)
    })
};

let server = Server::bind(&addr)
    .serve(new_svc)
    .map_err(|e| eprintln!("server error: {}", e));

hyper::rt::run(server);

Of course, the example above doesn’t actually do all the things a proxy should do, but it shows off how simple it is to create a client and server.

These easy builders are enabled by default via the runtime Cargo feature. Importantly, this means that if you already have some other sort of runtime, hyper can still integrate! Disabling the runtime feature removes the dependency on Tokio, though you will need to use the configuration to explain how your runtime works to hyper.

Errors

There has been confusion with hyper’s usage of errors in 0.11 that 0.12 cleans up. Before, both Service::Error and Stream::Error were required to be hyper::Error. However, it wasn’t clear what that really meant, and picking how to create a hyper::Error was equally confusing.

Now, any place that a user would return an error to hyper, it has been changed to just have the bounds Into<Box<std::error::Error>>. This means you can return any custom application error type that implements std::error::Error. hyper doesn’t particularly do anything special with those errors, besides some logging and trying to pass them back up to a higher level error handler, but it’s at least easier to determine what error to return: whatever you want to describe the failure you encountered!

The documentation around Service::Error has been clarified with this reminder as well: servers usually shouldn’t return an error back to hyper in most cases, and instead should return a Response with an appropriate 400 or 500 status code. Returning an error in a server will signal to hyper that it should abort the connection immediately.

Thanks!

Thanks to all those who helped get us this far! Whether it’s through writing code, diagnosing bugs, discussing design and issues, or running pre-release versions, it’s all part of what gets our community to a place with such awesome tools. Thank you.

As a wrap up, some links if you want to see more:
