
ThePrimeagen discusses the time and space complexity of linked lists. A student's question regarding the insertion of F is also covered in this segment.


Transcript from the "Linked List Complexity" Lesson

[00:00:00]
>> Now, we've already talked about insertion and deletion. And now you can understand that if I were to get a value, I'd have to walk the list, right? So if I asked for the fifth value, there is no way, with this kind of structure, to jump straight to the fifth value.

[00:00:17] I literally have to write a loop that's gonna be like for 0 to 5, current equals current.next, right? I'm gonna have to walk it until we get to the correct value. Then I can return current's value, correct? We don't return the containing node, because then we'd leak our container, someone could mess with the next and previous pointers, and boom, our whole list is corrupted and you lose everything.
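That walk, as a rough TypeScript sketch (the node shape and names here are illustrative, not the course's exact code):

```typescript
type Node<T> = { value: T; next?: Node<T>; prev?: Node<T> };

// O(n) in the index: no random access, so we step node by node.
function get<T>(head: Node<T> | undefined, idx: number): T | undefined {
    let curr = head;
    for (let i = 0; curr !== undefined && i < idx; ++i) {
        curr = curr.next;
    }
    // Return the value, never the containing node, so callers can't
    // tamper with next/prev and corrupt the list.
    return curr?.value;
}
```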

[00:00:42] So that's more of a practical implementation point. But you get the idea: the containing node is our abstraction for ourselves, not for the outside world. And so in general, getting head or tail, all those things, can be pretty fast operations. Head and tail are a unique kind of get, in the sense that our linked list implementation keeps a single reference, head, that points to the first item in the list.

[00:01:07] Getting head can become a constant operation, no matter how big or small the list is. Getting tail can also be a constant operation, because again, we already have a pointer defined to it. It works every single time. All right, so deletion at the front or the back, right, the head or the tail, is actually, again, a constant operation, is it not?
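A minimal sketch of why those gets are constant, assuming the list keeps both references (again, illustrative names rather than the course's exact code):

```typescript
type Node<T> = { value: T; next?: Node<T>; prev?: Node<T> };

class LinkedList<T> {
    private head?: Node<T>;
    private tail?: Node<T>;

    // Both O(1): each just follows a stored pointer, no traversal.
    peekHead(): T | undefined {
        return this.head?.value;
    }
    peekTail(): T | undefined {
        return this.tail?.value;
    }
}
```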

[00:01:33] Because if you think about it, all you have to do is take the head and run the deletion algorithm, or take the tail and run the deletion algorithm. But if you delete in the middle, you have to traverse to that point. So you have two operation costs: the traversal plus the deletion.
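The deletion step itself is just pointer surgery. A hedged sketch for a doubly linked list (illustrative names; assumes you already hold the node):

```typescript
type Node<T> = { value: T; next?: Node<T>; prev?: Node<T> };

// O(1) once you hold a reference to `node`; finding it is the O(n) part.
// A real list would also fix up its head/tail references when `node`
// sits at either end.
function unlink<T>(node: Node<T>): T {
    if (node.prev) node.prev.next = node.next;
    if (node.next) node.next.prev = node.prev;
    node.prev = node.next = undefined; // fully detach the node
    return node.value;
}
```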

[00:01:47] So deletion from the ends can be constant, but deletion in the middle can be costly if the traversal is costly. The deletion itself, of course, is not costly. Prepending and appending are constant time, again because you are able to insert by just rewiring the links at head or tail, so that's very, very fast.
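Prepend as a sketch under the same assumed node shape; append is the mirror image on the tail side:

```typescript
type Node<T> = { value: T; next?: Node<T>; prev?: Node<T> };

class LinkedList<T> {
    private head?: Node<T>;
    private tail?: Node<T>;

    // O(1): allocate a node, rewire two pointers, move head. No traversal.
    prepend(item: T): void {
        const node: Node<T> = { value: item, next: this.head };
        if (this.head) this.head.prev = node;
        this.head = node;
        if (!this.tail) this.tail = node; // the first item is also the tail
    }
}
```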

[00:02:05] But insertion in the middle, it's the traversal, again, that can get you. So long as you can traverse fast, your insertion is fast. So this is kind of the weakness and strength of a linked list. Yeah, it's a bit more complicated. Yes, it's not contiguous memory.

[00:02:19] There's a whole bunch of computer optimizations around contiguous memory, blah, blah, blah. What matters is that you are able to store these things, the list can be whatever size you want it to be, and deletion from the front or the back is extremely fast. And that is very, very important, because there are a lot of structures you use in which you want to say, delete the oldest item.

[00:02:40] The oldest item would hopefully be kept at the tail, so that's that constant operation, which is very, very fast. All right, I don't think we're gonna implement this, just because time is running out, and it's a very heavy operation to write out, you saw all the nexts and all the previouses.

[00:02:57] But in general, this is the kind of interface that you will see if you use that kata program that I built. This is effectively, I believe, the exact interface that would be used for the linked list. Here are all the operations you'd expect it to be able to do.
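For reference, the list interface in his kata-machine repo looks roughly like this (reproduced from memory, so treat the exact names and signatures as approximate):

```typescript
// Approximately the List<T> interface from ThePrimeagen's kata-machine
// repo; exact member names and signatures may differ.
interface List<T> {
    length: number;
    prepend(item: T): void;
    append(item: T): void;
    insertAt(item: T, idx: number): void;
    remove(item: T): T | undefined;
    removeAt(idx: number): T | undefined;
    get(idx: number): T | undefined;
}
```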

[00:03:11] Now program the underlying operations. All right, let's move forward, because we got so many sweet things to take care of. I started with LinkedLists because they are the most foundational item. Oop, we do have one question.
>> I believe the insertion of F after A is arbitrary. Usually we'll append F either at the beginning or at the end of the LinkedList for constant time.

[00:03:35]
>> So the thing is, it just depends on how you're using the list. If you insert at the beginning or the end, you have a constant time guarantee, as long as you have a reference to head or tail, depending on which side you're inserting on.

[00:03:47] But if you don't have a reference to the node directly, it's the cost of the traversal to that node, plus the cost of the insertion, which is constant. So traversal is where the big thing comes into play here: you have to be able to find or get the node quickly.
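In other words, once you hold the node, the splice itself is constant. A sketch of inserting after a given node (doubly linked, illustrative names):

```typescript
type Node<T> = { value: T; next?: Node<T>; prev?: Node<T> };

// O(1) given a reference to `node`; finding `node` is the O(n) part.
// A real list would also update its tail pointer if `node` was the tail.
function insertAfter<T>(node: Node<T>, item: T): Node<T> {
    const fresh: Node<T> = { value: item, prev: node, next: node.next };
    if (node.next) node.next.prev = fresh;
    node.next = fresh;
    return fresh;
}
```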

[00:04:02] And there are strategies to do that. And there even is a data structure we're gonna go over in which we have constant time traversal and constant time removal. So it'd be fantastic, right? We'll get there, but yes. So in general, I can't tell you how to use a LinkedList, cuz you use a LinkedList how you want to use a LinkedList, but I would strongly recommend not traversing.

[00:04:20] Anytime you do an N-length algorithm, there's probably a data structure that's better at it, right? That just makes sense. Sometimes that's as good as it gets, right? But for us, we don't have to worry about that. All right, so why did I start with LinkedLists? Well, they are super foundational.

[00:04:35] In fact, they're so foundational that every single LinkedList is a graph, and every single LinkedList is technically a tree. You can think of them in those terms. So we will get to those other data structures later, but LinkedLists are the most fundamental unit. And as long as you understand how to traverse and move around in a LinkedList, you're going to understand all these other data structures.