APSI Assignment 2 (System Request & SDLC)

The SDLC (System Development Life Cycle) is a structured process used to manage and develop computer systems or software. The SDLC comprises a series of processes designed to ensure that system development is carried out in a structured and efficient way, based on business or user needs.

The phases of the SDLC are as follows:

Requirements analysis: This step involves a thorough understanding of the business requirements or user needs. The development team works with stakeholders to identify and document the system requirements. This includes gathering information, analyzing needs, and preparing a requirements specification.

System design: In this phase, a system that meets the requirements is designed. This includes the architecture design, user interface design, database design, and other infrastructure design. The goal is to produce clear guidelines for developing and deploying the proposed system.

Software development: The development phase is where the actual program code is written. The development team uses the design plans to implement the system: writing code, developing modules, and testing system components to make sure their performance meets the requirements.

Testing & implementation: The testing process consists of verifying and validating the developed system. Testing is performed to ensure that the system works properly, is free of bugs, and meets the requirements. It can include performance testing, acceptance testing, and security testing. Implementation is when the developed system is ready to be deployed to the production environment. This includes deploying the system, migrating data, training staff, and preparing the necessary infrastructure.

Maintenance: Once the system is installed, the maintenance process begins. Maintenance involves monitoring and servicing the system to ensure optimal performance. Important updates and fixes are also applied during the maintenance period.


A System Request is a formal document created to initiate a request for the development of a new system or the improvement of an existing one. It serves as an official request to the development team or IT department to act on a system need. A System Request contains a brief description of the business problem or need the new system is meant to solve, the expected benefits, and basic information about the project.

A System Request typically contains the following information:


A brief description of the business problem or need.

The goals and expected benefits of the new system or system improvement.

An initial plan for the project, including the budget, required resources, and time constraints.

Identification of the parties involved and the relevant stakeholders.

Supporting documents such as a preliminary analysis or a feasibility study that has already been carried out.

The System Request is the first step in the system development process; once approved, it triggers subsequent steps such as requirements analysis, project planning, and system implementation.

Below is an example of a system request for the Spotify platform:

Spotify Requirements

Spotify is a large application, but its core functionality is to be an MP3 player with access to one of the largest curated libraries of music and podcasts. This means it must be highly available internationally for millions of users.


Spotify needs to fulfill the following user requirements:


Account creation and AuthN/Z (Authentication and Authorization)

Audio processing

Recommendations

Fast searching

Low-latency streaming

For system requirements, Spotify must expect to handle:


Billions of API requests internationally

Store several hundred terabytes of audio for 100+ million tracks.

Store several petabytes of metadata from 500+ million users.

For data alone, Spotify needs to store both user data and business data, and this amount can grow indefinitely, with current estimates around 5 petabytes.


3. Software Architectures

A software architecture is the blueprint or approach used to build software. Different architectures rely on different standards for building, integrating, and deploying components. Two common architectures are the monolithic architecture and the microservices architecture, with microservices being the more recent of the two.


Monolithic

This is the industry standard of software development, where software is designed to be a single executable unit. This architecture is ideal for applications where requirements are fixed.


In a monolithic architecture, we divide an application into layers, with each layer providing specific functionality:


Presentation Layer: This layer implements the application UI elements and client-side API requests. It is what the client sees and interacts with.

Controller Layer: All software integrations through HTTP or other communication methods happen here.

Service Layer: The business logic of the application is present in this layer.

Database Access Layer: All database accesses of the application, including both SQL and NoSQL, happen in this layer.

We often group layers together, with the Presentation Layer being called the frontend and the Controller, Service, and Data Access Layers being grouped into the backend. This simplifies software into communication between two parties: any application can be described as a frontend (client) talking to a backend (server).
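As a rough sketch, the layering above can be expressed in code. The class and route names here are hypothetical, chosen only to illustrate how each layer talks solely to the layer beneath it:

```python
class DatabaseAccessLayer:
    """Data Access Layer: owns all reads and writes to the data store."""
    def __init__(self):
        self._rows = {}  # stand-in for a real SQL/NoSQL store

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows.get(key)


class ServiceLayer:
    """Service Layer: business logic only; no HTTP or storage details."""
    def __init__(self, dal):
        self.dal = dal

    def register_user(self, name):
        if not name:
            raise ValueError("name required")
        self.dal.save(name, {"name": name})
        return self.dal.load(name)


class ControllerLayer:
    """Controller Layer: translates HTTP-style requests into service calls."""
    def __init__(self, service):
        self.service = service

    def handle(self, request):
        if request["path"] == "/users" and request["method"] == "POST":
            user = self.service.register_user(request["body"]["name"])
            return {"status": 201, "body": user}
        return {"status": 404, "body": None}


# The frontend (Presentation Layer) would issue requests like this one:
controller = ControllerLayer(ServiceLayer(DatabaseAccessLayer()))
response = controller.handle(
    {"path": "/users", "method": "POST", "body": {"name": "alice"}}
)
```

In a real monolith all three classes ship in one executable, which is exactly what makes the architecture simple to deploy but hard to scale piecemeal.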


Dividing applications into these layers led to design patterns like MVC, MVVM, and MVP, as well as frameworks that implement them, such as Spring for Java, .NET for C#, Qt for C++, Django for Python, and Express for JavaScript (on Node.js).


Microservices

Microservices builds on monolithic architectures. Instead of defining software as a single executable unit, it divides software into multiple executable units that interoperate with one another. Rather than having one complex client and one complex server communicating with one another, microservices split clients and servers into smaller units, with many simple clients communicating with many simple servers.


In even simpler terms, microservices splits a large application into many small applications.


The tradeoff between the two is summarized below:


Monolithic Architecture: Complex Services, Simple Relationships. Better for apps with Fixed Requirements (like a Calculator)

Microservices Architectures: Simple Services, Complex Relationships. Better for apps with Variable/Scaling Requirements (like a Social Media application)

Microservices borrows the exact same design patterns and layer methodology as monolithic architectures; it only implements them with different tools.


Microservices works by integrating the following units:


Frontend

Backend

Content Delivery Network

Elastic Load Balancer

API Gateway

Circuit Breaker

Cache

Service Client

Streaming Pipeline

Services

Databases

[Diagram: microservices system architecture]


Frontend

The frontend is the graphical UI of an application or site that the client interacts with. Webpage frontends have the option of being prerendered on a server and sent to a browser (server-side rendering, aka SSR) or rendered directly in the browser (client-side rendering, aka CSR). Application frontends are usually downloaded (as seen with desktop and mobile applications). CSR is more commonplace, as UI components can be dynamically rendered and updated with lower latencies. Many frontend UI frameworks exist, most of them in JS, but CSR can also be done in other languages like C#, C++, Java, and more using WebAssembly.


Backend

This is the main application, which handles integrations and APIs with a Controller Layer, business logic with a Service Layer, and data storage and access with a Data Access Layer.


Third-party applications and libraries are often used to implement the components in each layer.


Content Delivery Networks (Service Layer)

A CDN or Content Delivery Network is used to solve latency issues when a client loads requested content on a device. A CDN stores static files to be quickly delivered to clients. It is placed on the network edge for services that prioritize delivering content to users.


Load Balancers (Controller Layer)

A load balancer is a specialized server optimized for routing that quickly distributes incoming requests across multiple targets. It is meant to evenly distribute requests across a network's nodes to reduce performance issues. Load balancers can process millions of requests and usually redirect requests to one of many secure API gateways.
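The "evenly distribute" part is often implemented as round-robin rotation over the targets. A minimal sketch (target names are made up; real load balancers also perform health checks and TLS termination):

```python
from itertools import cycle


class RoundRobinBalancer:
    """Toy load balancer: hands each request to the next target in rotation."""

    def __init__(self, targets):
        self._targets = cycle(targets)  # endless round-robin iterator

    def route(self, request):
        target = next(self._targets)
        return target, request


# Three hypothetical API gateways behind one balancer:
lb = RoundRobinBalancer(["gateway-a", "gateway-b", "gateway-c"])
routed = [lb.route({"id": i})[0] for i in range(6)]
# Requests cycle evenly: a, b, c, a, b, c
```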


API Gateways (Controller Layer)

An API gateway is a server combined with several dynamic route filters that can filter and send batch requests to a specific microservice or service client. It reduces the number of round trips data makes between services and can also handle functionality such as user authentication and SSL termination (where secure HTTPS connections are downgraded to HTTP connections for faster communication). An API gateway can also play the same role as a load balancer, but it handles fewer requests at a time than a dedicated load balancer: a load balancer can route up to a million requests per second, while an API gateway can only handle up to 10,000 requests per second. Large applications will have multiple API gateways that a load balancer can route to, and API gateways can be used to organize requests, such as requests from mobile devices, desktop devices, and browsers.
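Two of the gateway responsibilities above, centralized authentication and prefix-based routing, can be sketched in a few lines. The route table and token scheme here are invented for illustration:

```python
# Hypothetical route table: path prefix -> backing microservice
ROUTES = {
    "/search": "search-service",
    "/stream": "streaming-service",
}


def gateway(request, valid_tokens):
    """Authenticate the caller, then route by path prefix."""
    # Centralized auth: no service ever sees an unauthenticated request
    if request.get("token") not in valid_tokens:
        return {"status": 401, "service": None}
    for prefix, service in ROUTES.items():
        if request["path"].startswith(prefix):
            return {"status": 200, "service": service}
    return {"status": 404, "service": None}


tokens = {"tok-1"}
ok = gateway({"path": "/search/artists", "token": "tok-1"}, tokens)
denied = gateway({"path": "/search/artists", "token": "bad"}, tokens)
```

A production gateway (e.g. one built on a reverse proxy) would add rate limiting, request batching, and SSL termination on top of this same routing core.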


Circuit Breakers (Controller Layer)

A circuit breaker is a design pattern where the problems caused by latency from interservice communication can be avoided by switching to backup services in case a primary service fails. Hystrix is a Java library that adds fault and latency tolerance to microservices, acting as a circuit breaker. This is especially useful when using third-party microservices, as Hystrix can isolate endpoints and open ports to fallback services.
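The core of the pattern fits in a small class: count consecutive failures, and once a threshold is reached, stop calling the primary and go straight to the fallback. This is a simplified sketch of the idea (not Hystrix's actual API; Hystrix also supports half-open probing and timeouts):

```python
class CircuitBreaker:
    """Opens after `max_failures` consecutive errors, then uses the fallback."""

    def __init__(self, primary, fallback, max_failures=3):
        self.primary = primary
        self.fallback = fallback
        self.max_failures = max_failures
        self.failures = 0

    def call(self, *args):
        if self.failures >= self.max_failures:
            return self.fallback(*args)  # circuit open: skip the primary
        try:
            result = self.primary(*args)
            self.failures = 0            # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            return self.fallback(*args)


def flaky_service(track_id):
    raise ConnectionError("primary service down")


# Fall back to a hypothetical cached response when the primary keeps failing:
breaker = CircuitBreaker(flaky_service, lambda t: f"cached:{t}", max_failures=2)
results = [breaker.call("track-1") for _ in range(3)]
```

After two failed attempts the breaker trips, so the third call never touches the failing primary at all, which is what protects the rest of the system from cascading latency.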


Service Clients (Controller Layer)

After data is sent from the API gateway and passes through a circuit breaker, it is received by a service client. A service client is a reverse-proxy HTTP server used as a microservice that allows other microservices to communicate with the API gateway.


Caches (Service Layer/Data Access Layer)

A cache is used to speed up responses when querying databases and warehouses, as the service client will first search the cache for the result of a recent response before posting an event on a streaming pipeline. The cache can be implemented with Memcached or Redis.
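The lookup order described above (cache first, database on a miss) is the cache-aside pattern. A minimal in-process sketch, using a plain dict where a deployment would use Memcached or Redis:

```python
import time


class CacheAside:
    """Check the cache first; fall back to the database on a miss."""

    def __init__(self, db_lookup, ttl=60.0):
        self.db_lookup = db_lookup   # slow path, e.g. a SQL query
        self.ttl = ttl               # seconds before an entry goes stale
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        entry = self._cache.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            self.hits += 1
            return entry[0]          # fast path: served from cache
        self.misses += 1
        value = self.db_lookup(key)  # slow path: query the database
        self._cache[key] = (value, time.monotonic())
        return value


# Hypothetical database lookup; the key format is made up for illustration:
store = CacheAside(lambda k: f"row-for-{k}")
first = store.get("user:42")   # miss -> database
second = store.get("user:42")  # hit  -> cache
```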


Streaming Pipeline (Service Layer)

A streaming pipeline is a piece of software that allows services to communicate with one another and process items in transit.


Services are usually hosted with their own IP addresses and endpoints. Using a REST API can enable services to communicate with one another, but it becomes too tightly coupled, as every service must have the address of every other service it wants to talk to. Pipelines are a design pattern that allows services to talk to one another without storing information about the sender, such as an address. Instead, a pipeline is a public inbox, where all services can check and see if a message or task exists for them. The pipeline itself is referred to as a message broker.


Streaming pipelines can define the behavior of the message broker as either a message-driven architecture or an event-driven architecture. Message-driven architectures address a task to a select service to respond to, but any service can view it. Event-driven architectures simply post a task with no address and let every service that views it decide whether to respond or not. Pipelines function off a publisher-subscriber model, where a service publishes messages or events to the pipeline, and other services can subscribe to the pipeline to see incoming messages and events.
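The publisher-subscriber model above can be sketched as a toy in-process broker. Topic and service names are invented; a real deployment would use Kafka or RabbitMQ, which add persistence, partitioning, and delivery guarantees:

```python
from collections import defaultdict


class MessageBroker:
    """Toy publish/subscribe broker: a 'public inbox' keyed by topic."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Event-driven: the event carries no address; every subscriber
        # sees it and decides on its own whether to react.
        for handler in self._subscribers[topic]:
            handler(event)


broker = MessageBroker()
seen = []

# Two hypothetical services subscribe to the same event stream:
broker.subscribe("track-played", lambda e: seen.append(("recommender", e)))
broker.subscribe("track-played", lambda e: seen.append(("analytics", e)))

broker.publish("track-played", {"track": "song-1", "user": "u1"})
```

Note that the publisher never learns who consumed the event, which is exactly the decoupling that direct REST calls between services cannot provide.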


Both Apache Kafka and RabbitMQ are streaming pipelines, but since Spotify relies on a lot of real-time processing and a high amount of throughput for its search and recommendation engine, Kafka is the smarter choice to use as a streaming pipeline due to its event-driven architecture.


Services (Any Layer)

Services are independent instances of an application, which can be from third parties or developed in-house. Using containerization, we can host these running applications on multiple machines, then use a streaming pipeline to allow them to talk to each other.


Pairing services with a streaming pipeline creates the core of the microservices architecture.


Databases (Data Access Layer)

A database is an application that can store and retrieve data. This software can be treated like any other service and is usually hosted on a separate server from the rest of the backend.



Sources:

https://course-net.com/blog/sldc-adalah/

https://iq.opengenus.org/system-design-of-spotify/
