This video belongs to the openHPI course Blockchain - Sicherheit auch ohne Trust Center.
- 00:00In our openHPI course on blockchain security, we now want to take a first look at so-called distributed systems.
- 00:10And as an example, we take a bank and its customers,
- 00:15and then we have the situation where the bank keeps all information on account balances and customer transfers on its own systems, on its own computers.
- 00:28And now the bank is successful, more customers are joining, and the question is: how can it scale, how can the service be made available to a larger number of users?
- 00:39There's a first approach to scaling, the so-called scale-up.
- 00:45The basic idea of scale-up, also called vertical scaling, is simple: the application systems
- 00:52that the bank needs to serve its customers simply run on ever larger computers.
- 01:00That means that whenever the bank gains new customers, it just buys a bigger computer, and the next time an even bigger one.
- 01:09This is of course a big financial investment, and upgrades are required not only in hardware but also in software.
- 01:16So the bank scales by enlarging its internal system.
- 01:22There's a limit to that, and here - I don't know if you can see it very clearly - is the example of a telephone exchange.
- 01:30It's the same subject: when the telephone system was introduced - a very nice picture from the end of the 19th century in Stockholm - a telephone tower.
- 01:43And because it was supposed to serve so many customers, you're looking at a maze of wires,
- 01:48and this raises a lot of questions: how is the load distributed? What about fail-safety?
- 01:56If this one system here, this central tower, fails, the whole system is broken.
- 02:04Can this be extended further?
- 02:07Of course, there are limits somewhere. And what about trust?
- 02:11So these are all questions that lead to a second approach to scaling, called scale-out, or horizontal scaling.
- 02:24Here the idea is that the bank's applications, when there's more load, more customers coming,
- 02:30are realized not on one larger computer, but on several computers of the same size.
- 02:37The picture shows what this looks like.
- 02:41So if you do this scale-out, this horizontal scaling, there are a number of advantages, because you can also spread the system geographically.
- 02:54If the customers are spread over a large area, perhaps even globally, you can distribute these different computing systems and bring them close to the customers.
- 03:06If one of these computers fails, the system as a whole can continue to function; redundancy is possible.
- 03:15And because ever larger computers are of course much more expensive,
- 03:21you can instead use ordinary, cheap consumer hardware;
- 03:28that was the secret of Google's success, which simply put in a large number of simple machines in order to satisfy the worldwide search requests.
- 03:38Even upgrades only require adding a new machine, or a group of new machines, rather than buying some big supersystem.
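The scale-out idea from the lecture can be sketched in a few lines of code: customer accounts are spread across several identical "servers" by hashing the account ID, so adding capacity means adding machines rather than buying a bigger one. All names here (`ShardedBank`, `deposit`, etc.) are illustrative, not part of the lecture.

```python
import hashlib

class ShardedBank:
    """Toy sketch of horizontal scaling via hash-based sharding."""

    def __init__(self, num_servers):
        # One dict per "server"; in reality these would be separate machines.
        self.servers = [dict() for _ in range(num_servers)]

    def _server_for(self, account_id):
        # A deterministic hash maps every account to exactly one server.
        h = int(hashlib.sha256(account_id.encode()).hexdigest(), 16)
        return self.servers[h % len(self.servers)]

    def deposit(self, account_id, amount):
        shard = self._server_for(account_id)
        shard[account_id] = shard.get(account_id, 0) + amount

    def balance(self, account_id):
        return self._server_for(account_id).get(account_id, 0)

bank = ShardedBank(num_servers=4)
bank.deposit("alice", 100)
bank.deposit("bob", 50)
bank.deposit("alice", 25)
print(bank.balance("alice"))  # 125
```

Note that this simple modulo scheme reshuffles most accounts when a server is added; real systems use consistent hashing to avoid that.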
- 03:50And what we achieve is that the bank has a distributed system with which it can serve its customers (unv. #00:03:59-8# ).
- 03:59The definition by Andrew Tanenbaum - a famous computer scientist - is: such a distributed system is a collection of independent computers
- 04:10that appears to the user, to the customer of the bank, as a single system.
- 04:17The customer does not know on which computer his bank transfer is currently being processed or his account balance is being updated.
- 04:25To him, it looks like the bank's system.
- 04:30Actually, when you look closer, it's a collection of independent computers
- 04:37connected to a network so they can exchange data, that implement communication between each other via message exchange,
- 04:49that perform a common task - in our example it was the operation of the bank -
- 04:56that maintain independent data storage, so that the bank still presents itself to the customers as a single system,
- 05:07while the bank itself has different systems in use, which ensure an exchange of messages between each other,
- 05:16so that, for example, the account display on the different computers shows the same numbers for a customer.
- 05:24This is what we call a distributed system.
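The definition above - independent computers that communicate only by exchanging messages, yet present one consistent account view - can be sketched as a pair of toy nodes that forward every update to their peers. `Node`, `handle_transfer`, and `apply` are illustrative names, not from the lecture, and real systems must also handle lost and reordered messages.

```python
class Node:
    """Toy replica: applies updates locally and forwards them as messages."""

    def __init__(self, name):
        self.name = name
        self.balances = {}
        self.peers = []

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def handle_transfer(self, account, amount):
        # The node that receives a customer request applies it locally ...
        self.apply(account, amount)
        # ... and forwards it as a message to every peer.
        for peer in self.peers:
            peer.apply(account, amount)

    def apply(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

a, b = Node("A"), Node("B")
a.connect(b)
a.handle_transfer("alice", 100)   # this request happens to arrive at node A
b.handle_transfer("alice", -30)   # the next one arrives at node B
print(a.balances["alice"], b.balances["alice"])  # 70 70
```

From the outside it does not matter which node handled a request: both replicas show the same balance, which is exactly the "looks like a single system" property.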
- 05:29This does not only occur in the banking context, but in many different contexts, and what marks a distributed system is
- 05:38that there's no master there telling everyone what to do;
- 05:45instead, the computers are connected to each other, can exchange messages,
- 05:49must thereby ensure that they present the same information, and from the outside they look like one system.
- 05:58The challenge is to achieve exactly that: a distributed system is a system where the individual tasks are not done by one computer,
- 06:11but are solved by a network of computers - these are the challenges.
- 06:17And Leslie Lamport, another famous computer scientist, got to the point:
- 06:22a distributed system is a system in which I can't get my work done because a computer I have never even heard of isn't working.
- 06:31Every request must receive a correct answer, no matter which computer in this distributed system it reaches;
- 06:42each request also actually receives an answer; the communication between different computers can be interrupted - that is the case with such systems -
- 06:52and still presenting it to the customer as a uniform system that solves tasks, that is the challenge.
- 07:01Now, theoretically, formally, these challenges are well described by the so-called CAP theorem.
- 07:11The first thing is consistency, the C: that every request gets a correct answer.
- 07:20Then availability, the A: that each request actually receives an answer, that nothing gets lost.
- 07:33And then partition tolerance, the P in this CAP theorem: that the communication between different computers can be interrupted.
- 07:43If these are network connections, they can break.
- 07:47So the challenge in using such distributed systems is ensuring consistency, ensuring availability, and ensuring partition tolerance.
- 08:01And now, of course, it is clear that these are three conditions which cannot all be fulfilled at the same time.
- 08:08You can see that right away: if a line is disconnected and something is typed in at one place,
- 08:15then it may not be available at a different place, on a different machine.
- 08:20So in this triangle of partition tolerance, availability, and consistency, not all three conditions can be met at the same time.
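The CAP trade-off in the lecture can be made concrete with a small sketch: during a partition, a replica that cannot reach its peer must either refuse to answer (keeping consistency, giving up availability) or answer with possibly stale data (keeping availability, giving up consistency). `Replica` and `prefer_consistency` are illustrative names, not part of the lecture.

```python
class Replica:
    """Toy replica illustrating the CAP trade-off during a partition."""

    def __init__(self):
        self.value = 0
        self.partitioned = False

    def write(self, value, peer):
        self.value = value
        if not self.partitioned:
            peer.value = value  # the replication message gets through

    def read(self, prefer_consistency):
        if self.partitioned and prefer_consistency:
            # CP choice: refuse to answer rather than risk a stale value.
            raise RuntimeError("unavailable: cannot confirm latest value")
        # AP choice: always answer, even if the value may be stale.
        return self.value

primary, secondary = Replica(), Replica()
primary.write(100, secondary)                    # both replicas now hold 100
primary.partitioned = secondary.partitioned = True
primary.write(250, secondary)                    # update cannot reach the secondary
print(secondary.read(prefer_consistency=False))  # 100 (stale but available)
```

With the partition in place, the secondary either returns the outdated 100 or raises an error - there is no third option that is both consistent and available, which is the content of the theorem.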
- 08:29And that's the challenge one is confronted with in such distributed systems,
- 08:36and what we achieve with blockchain technology is that a distributed system can, for example, support a financial system like Bitcoin.