
The "Orientation, Touch, & Virtual Keyboard APIs" Lesson is part of the full, A Tour of Web Capabilities course featured in this preview video. Here's what you'd learn in this lesson:

Max shares examples of APIs for device orientation, touch events, and working with virtual keyboards. Some APIs are JavaScript-based. Others, like the virtual keyboard, also provide CSS APIs for adjusting styles when the virtual keyboard is present.


Transcript from the "Orientation, Touch, & Virtual Keyboard APIs" Lesson

[00:00:00]
>> Another API: screen orientation. Not every API will be as large as geolocation. Some APIs are pretty simple. Screen orientation is one of those. With screen orientation, you can know the current orientation and lock it to one specific orientation. And I separated that in green and light green. Remember those categories.

[00:00:20]
Actually, knowing the current orientation is knowing if you are in portrait or landscape. And that's green, which means it works everywhere, on iPhones, on Androids, even on desktop. And there is a second part of the API that is not available everywhere. For example, in Safari today, for iOS and iPad, it's experimental, so you have to enable an experimental feature.

[00:00:45]
That means that at least it will become stable at some point. That second part lets you lock the orientation. So locking the orientation is this idea of calling screen.orientation.lock(). In that case, your web app can be in portrait forever, or in landscape forever. This is useful for games, for example, which today you cannot lock.

[00:01:10]
The only thing that you can do today is use media queries and render something like a modal image saying, hey, please turn your phone right or left. Okay, but with this API you will be able to lock that. So while the user is browsing your web app, you can lock the orientation.

[00:01:27]
And you can specify which orientation you want: portrait, landscape, and then we have portrait-secondary, which is actually not a good one. It's upside down. That's not a good one because the user will never know where the home button or the gesture is. So, no, don't do that.
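Just to make that concrete, here is a rough sketch of reading and locking the orientation. Treat it as a hedged example, since lock() returns a promise that can be rejected, for instance in browsers where the feature is still experimental or only allowed in fullscreen.

    // Reading the current orientation (the part that works broadly):
    console.log(screen.orientation.type); // e.g. "portrait-primary" or "landscape-primary"

    screen.orientation.addEventListener('change', () => {
      console.log('Now in', screen.orientation.type);
    });

    // Locking (the part that is not available everywhere yet):
    async function lockToLandscape() {
      try {
        await screen.orientation.lock('landscape');
      } catch (err) {
        // Rejected when unsupported, experimental, or not allowed in the current context
        console.warn('Could not lock orientation:', err);
      }
    }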

[00:01:45]
Anyway, that was quick. Touch events. So we know that on mobile devices, typically, we are using our fingers, and I'm saying typically because we also have pens or pencils, we have styluses that we can use. But we do have touch events. Why not mouse events? Because the standard, classic mouse events are not good enough.

[00:02:09]
First, mouse events such as click, mouseleave, mousedown, or hover, so mouseover, mouseout, the classic mouse events, are not good enough because here we can use more than one finger. At the same time, we have more than one pointer, okay?

[00:02:29]
And mouse events don't actually support that. Okay, that's one thing. That's the question, yeah.
>> The other thing is that the mouse always exists, while the screen's not always being touched.
>> Again?
>> The mouse icon never goes away even if we stop moving it.
>> Okay, so the mouse pointer on the screen never goes away even if you don't have a mouse connected, actually.

[00:02:53]
Yeah, and here, actually, with fingers, on some screens we also have pressure, we can apply pressure, or an angle. Okay, I'm not saying on every screen. In fact, on iOS, we don't have pressure anymore. We used to have pressure, but Apple decided to change that because no one was actually using it.

[00:03:16]
But on the MacBook, for those of you on a MacBook, the trackpad has pressure. And actually, you can detect that pressure, but not with mouse events. Well, we do have touch events, then. They work only, of course, with touch screens. They have some basic events that you bind to the document or to any element: touchstart, touchend, touchcancel.

[00:03:39]
By the way, what's touchcancel? So let's say that you have your finger on the screen. You're moving the finger, that's touchmove; you lift it, that's touchend. But why cancel? Well, maybe your finger is there, but then someone's calling you. You have an incoming call.

[00:03:59]
So if you have your finger over the green button, the accept-the-call button, it's not going to accept the call, because your finger was there, your touch was there before the call. And in that case, it's a cancel. Your finger is still there, but the touch is going to be canceled, okay?

[00:04:15]
So we have touchmove when you drag. And this event is fired with multiple coordinates for multi-touch, including the force if it's available. So how many fingers can you detect? It depends on the device, actually. On iPhones it's typically five; on Android it depends on the hardware, typically between four and eight.
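As a rough sketch of what that looks like in code (the element here is just made up for illustration), you bind the touch events and read the list of active touches from the event:

    const area = document.querySelector('#touch-area'); // hypothetical element

    area.addEventListener('touchstart', (event) => {
      // event.touches lists every finger currently on the screen
      console.log('Fingers on screen:', event.touches.length);
    });

    area.addEventListener('touchmove', (event) => {
      event.preventDefault(); // keep the page from scrolling while dragging
      for (const touch of event.touches) {
        // Each touch point has its own coordinates, plus force where the hardware supports it
        console.log(touch.identifier, touch.clientX, touch.clientY, touch.force);
      }
    }, { passive: false });

    area.addEventListener('touchcancel', () => {
      // Fired when the system interrupts the gesture, like the incoming call example
      console.log('Touch was canceled');
    });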

[00:04:44]
On iPad it's 11. Don't ask me why 11. You will see an array of coordinates. Okay, if you wanna play with this, just to see how this works, I have a multi-touch tester here. It has the code there if you want; let me turn the code on and off.

[00:05:05]
So if you touch with one finger over these cells, you will get yellow. With two fingers, you get pink; with three fingers, light blue. Four fingers? You say, what? Look at this, nothing. Why? Because this phone detects up to three. An iPhone will take four. Okay, it's just that.

[00:05:33]
It's for fun, for games, or for some special gesture, like tapping with two fingers to do something, go to the home screen, things like that. It's actually pretty simple. It's not really complicated, okay, but it's kind of a capability that we have. Touch events were actually an Apple invention.

[00:05:56]
They were created by Apple for the iPhone, because the iPhone was the first multi-touch device on the market, at least on the end-user market. And also, they had a patent on that. So that's why other vendors were not implementing the Touch Events API, because they had the risk of a legal issue with Apple.

[00:06:20]
So that's why the web community created another spec that, in the end, Apple also implemented. It's called Pointer Events. And today, we typically have both on every mobile device, so it's up to you which one to use: touch events, the original Apple spec, or pointer events. I will explain the difference in a second.

[00:06:45]
Pointer events are based on mouse events, so they're pretty similar to how mouse events work, and they're multi-pointer. They can work with a mouse, with trackpads, with touch, with pens, and anything that you can connect to your device. And you will receive one call, one execution of your event handler, per interaction.

[00:07:10]
For example, if I have three fingers, I receive three event handler calls. With touch events you receive one with an array, okay, makes sense? In this case you will receive three different execution calls, and you will get the ID of the finger, kind of, to tell them apart. The events are the same as the mouse ones; instead of mousedown, now it's pointerdown.

[00:07:39]
Pointerdown, pointerup, pointercancel. So it's based on mouse events: pointerover, pointerout, move, enter, and leave. And the event receives coordinates, one coordinate because it's for one pointer; a pointer ID that you can use to track that pointer over time; the pointer type, so you will know if it's a mouse, a finger, or a pen; and optional pressure, tilt, and twist data.
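A hedged sketch of the same idea with pointer events (again, the element is just for illustration):

    const canvas = document.querySelector('#drawing-area'); // hypothetical element

    canvas.addEventListener('pointerdown', (event) => {
      // One call per pointer: a mouse, a finger, or a pen each fire their own event
      console.log('Pointer', event.pointerId, 'is a', event.pointerType); // "mouse", "touch", or "pen"
    });

    canvas.addEventListener('pointermove', (event) => {
      // One coordinate pair per pointer, plus optional pressure, tilt, and twist
      console.log(event.clientX, event.clientY, event.pressure, event.tiltX, event.tiltY, event.twist);
    });

    canvas.addEventListener('pointerup', (event) => {
      console.log('Pointer', event.pointerId, 'lifted');
    });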

[00:08:10]
That is not available on every hardware. It depends on the hardware. It can be a Samsung pen, an Apple Pencil, for example, and you can receive that metadata, makes sense? And this capability is currently available on every device, desktop and mobile. Virtual keyboard. This is interesting, and I think that more people should start using it.

[00:08:37]
And we are waiting for Apple; Apple is not implementing this API yet. What's this? This is for the built-in keyboard. You know that on mobile devices we have a built-in keyboard on screen. Well, can we show that virtual keyboard or hide the virtual keyboard from JavaScript?

[00:08:55]
No, not really. Can we detect that the virtual keyboard is on the screen and how much space it's taking? No. Well, this API will let us do that. So it's navigator.virtualKeyboard, and it's already working today on Chrome, on desktop and on mobile devices. You say, desktop? Well, on Windows, for example, you can have some devices on Windows 10 and 11.

[00:09:25]
Where you can detach the physical keyboard, and in that case, you will get a virtual keyboard, okay? So you can show it, hide it. And there is an event, geometrychange, where you receive the x, y, width, and height of the keyboard on the screen. So then you can adjust your design or your UI accordingly.
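A minimal sketch of that, assuming Chromium's VirtualKeyboard API (so it is simply skipped on browsers that don't expose it yet):

    if ('virtualKeyboard' in navigator) {
      // Opt in: the keyboard overlays the page instead of resizing it,
      // and you handle the geometry yourself.
      navigator.virtualKeyboard.overlaysContent = true;

      navigator.virtualKeyboard.addEventListener('geometrychange', (event) => {
        // boundingRect tells you where the keyboard sits on the screen
        const { x, y, width, height } = event.target.boundingRect;
        console.log('Keyboard rect:', x, y, width, height);
      });

      // Showing and hiding programmatically (only honored in some contexts,
      // for example when an editable element has focus):
      // navigator.virtualKeyboard.show();
      // navigator.virtualKeyboard.hide();
    }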

[00:09:51]
Does it make sense? We also have new CSS environment variables that you can use for margins, for paddings, that will tell you where the keyboard starts and ends, so you can work with CSS. I have a video to show you what that looks like with a sample.

[00:10:14]
So actually, when you open the keyboard, you can see that everything moves automatically. Well, you do that with CSS, just by applying the right margins and paddings. And actually they created this API for that particular sample, when you have a chat. That was actually complicated; sometimes everything moves out.

[00:10:36]
And in this case, we are actually making the whole scrollable area smaller because of the virtual keyboard.
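For reference, here is a rough sketch of the CSS side of that chat layout, using the keyboard-inset environment variables. The .message-list class is made up for the example, and the variables only report the keyboard geometry once the page opts in with overlaysContent:

    // The env(keyboard-inset-*) values are populated once overlaysContent is on.
    navigator.virtualKeyboard.overlaysContent = true;

    const style = document.createElement('style');
    style.textContent = `
      .message-list {
        /* Shrink the scrollable chat area so it ends where the keyboard starts */
        padding-bottom: env(keyboard-inset-height, 0px);
      }
    `;
    document.head.appendChild(style);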
