[Virtual Presenter] Hi. My name is Jack, and this presentation is based on the SNHU CS 470 Cloud Development Project by Vincent Bostic. Today we are going to cover the processes and technologies we used to develop and deploy a full-stack web application on Amazon AWS.
[Audio] The processes and technologies we are going to cover today are: Containerization: what is containerization, and how is it used? Orchestration: what is Docker Compose, and why should you use it? The serverless cloud: what is serverless computing, and what are its advantages? API and Lambda: what are Amazon API Gateway and Lambda, and what services do they provide? Databases: what are the differences between MongoDB and Amazon's DynamoDB? Cloud-based development principles: what details do you need to consider when planning a cloud-based application deployment? And security in the cloud: the principles, methods, and tools for securing cloud-based applications.
[Audio] Cloud computing models. The three cloud computing models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). They mainly differ in the level of abstraction provided to the customer. (IBM 2023) Infrastructure as a service provides physical machine resources or virtual machines where the customer can provision, configure, and operate the servers in much the same way they would manage their own dedicated servers, with the added benefit of scalability without the up-front cost. Platform as a service provides a cloud-based platform that customers can use to develop, run, and manage applications created using tools and libraries supported by the service provider. All computing resources are provided by the vendor, including the operating system, database, runtime libraries, and development tools and frameworks. Using PaaS allows developers to collaborate throughout the entire software development lifecycle and run their application on a scalable platform without having to manage the computing resources. Software as a service is cloud-hosted, ready-to-run application software. The application and all underlying computing resources are entirely managed by the provider. Many traditional desktop applications are now also offered as SaaS services; Office 365 is one example. Customers usually pay a monthly or annual fee for as long as the application is needed. The main benefit of SaaS is that it is essentially an on-demand service: all an individual needs to do is create an account and sign up for the service.
[Audio] Containerization. Containerization is a means of packaging and deploying an application so that it will run consistently on any platform. Applications are packaged into a single container together with the dependencies, libraries, and configuration files needed to run them. Differences in operating system distributions and infrastructure are abstracted away, removing many issues caused by disparate dependency requirements. (IBM 2023) Containerization is a good choice when you need to run applications with a high level of isolation, portability, and efficiency. Compared to virtual machines, containerized applications are also much faster to deploy and run. Applications can be quickly deployed and spun up without installing and booting an operating system with each deployment. Containers are also ideal for creating modular applications built up as microservices. Complex applications can be developed as individual containerized modules that can be added, removed, and modified as needed without having to rebuild the entire application.
[Audio] Migrating applications to the cloud. There are several options available when migrating a full-stack application to the cloud. (Berardi 2023) (Altynpara 2021) The first and easiest is redeployment. Here, we simply deploy our application to a cloud-hosted virtual machine or vendor-managed servers. Doing so requires little or no change to the application code. This is typically the simplest approach but also the most costly in terms of server and software maintenance. Re-platforming involves rebuilding our application using the resources provided by the cloud hosting service, which can include databases and other SaaS applications. This approach removes much of the concern over maintenance and security of the application server when services are managed by the vendor. Refactoring, rewriting parts of the application to use cloud-native services, is a good option when parts of your application cannot be duplicated with SaaS components. Building our full-stack application: the initial deployment of our application was on our local machine and was built using Docker containers for the Angular front end, the backend API with Loopback, and the MongoDB database. We then used Docker Compose to orchestrate these three containers to function as one full-stack application.
[Audio] Docker Compose. Docker Compose is a utility for building and managing platform-agnostic, container-based applications. Such an application is designed as a set of containers (components or services), each performing its own functions and communicating with other containers through networks. Containers store and share persistent data in volumes, and volume data is preserved when containers are recreated. (Takács 2018) (Docker 2023) With Compose you define your application's containers in a single static YAML* file, and then run a single command to create, start, or update the whole application at once. The YAML file can be version controlled, allowing you to roll back changes easily. Even for a single-container application, Compose can provide a tool-independent configuration in a way that a single Dockerfile cannot: configuration settings such as volume mounts and port mappings can all be declared in the Compose YAML file. Docker with Docker Compose greatly simplifies the software development lifecycle. Separation of concerns through containerization allows changes to be deployed and scaled independently. Compose works in all environments: production, staging, testing, and local development. The application stack is defined in a single version-controlled file in the project root directory, so developers can contribute independently using the Compose application definition. And since Docker provides an isolated environment, conflicts with other applications or dependencies are reduced. *YAML is an easy-to-read, easy-to-use data formatting language that serves as an alternative to JSON or XML.
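As a rough illustration of what this looks like in practice, here is a minimal Compose file sketch for a three-container stack like the one described above. The service names, build paths, image tag, and port mappings are assumptions made for this sketch, not the project's actual configuration.

```yaml
services:
  # Angular front end, served on port 4200 of the host (hypothetical build path)
  frontend:
    build: ./angular-client
    ports:
      - "4200:80"
    depends_on:
      - api

  # Node/Loopback REST API (hypothetical build path and port)
  api:
    build: ./loopback-api
    ports:
      - "3000:3000"
    environment:
      # Containers reach each other by service name on the Compose network
      - MONGO_URL=mongodb://db:27017/appdb
    depends_on:
      - db

  # MongoDB database with a named volume so data survives container recreation
  db:
    image: mongo:6
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data:
```

With a file like this in the project root, a single `docker compose up` builds and starts all three containers on a shared network, which is what lets the front end, API, and database behave as one application.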
[Audio] Serverless Computing. In a serverless computing environment you don't have to manage servers or maintain virtual machines. All the developer needs to do is write their application code and deploy it to a container managed by the service provider. The service provider manages the rest, including provisioning the infrastructure needed by your application and scaling up and down on demand. (IBM 2023) The advantages of serverless computing are freedom from server maintenance, configuration, server security management, capacity planning, and system monitoring. With serverless computing you never pay for idle capacity: billing starts when your code execution begins and ends when execution completes, and pricing is based on execution time and the resources required. Amazon S3. Amazon S3 is a subscription-based cloud storage solution. With S3, you select a region where you want your data stored and create a bucket to begin storing data. There is no minimum fee and no setup cost, and you only pay for what you use. There are different storage classes available for different use cases. These include Standard, which provides the high performance and high availability needed for cloud applications, and Glacier, which is suitable for infrequent access such as data archives and long-term storage. A unique feature of S3 is the redundancy it provides. By automatically copying objects to multiple devices across multiple Amazon facilities, S3 can preserve, and let you retrieve, every version of every object stored in a bucket. There are many advantages to using S3 over local storage. The downsides of local storage are the security, infrastructure, and systems management responsibilities that go along with hosting your own servers. For a small, isolated organization that doesn't deliver much content, that may be fine. But for larger organizations that have unpredictable service loads and require near-constant availability, managing local storage can be unpredictable and costly. Not having enough capacity can create problems like service interruptions or data loss; having too much simply adds to the cost. There's also the cost of the additional IT staff required to manage the operation.
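To make the S3 usage model concrete, here is a minimal sketch of writing an object to a bucket with the AWS SDK for JavaScript (v3). The bucket name, key, and region are hypothetical, and credentials are assumed to come from the standard AWS environment (environment variables, shared config, or an IAM role).

```typescript
// Minimal sketch: store a text object in an S3 bucket.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // hypothetical region

export async function saveReport(contents: string): Promise<void> {
  // The bucket is created once (via the console or CLI); after that, writes
  // are simple PutObject calls, billed only for the storage and requests used.
  await s3.send(
    new PutObjectCommand({
      Bucket: "example-app-reports",      // hypothetical bucket name
      Key: `reports/${Date.now()}.txt`,   // hypothetical object key
      Body: contents,
    })
  );
}
```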
[Audio] Amazon API Gateway and Lambda. Amazon API Gateway is an AWS service for developing, deploying, maintaining, and securing REST, HTTP, and WebSocket APIs that expose backend endpoints, AWS Lambda functions, or other AWS services. (Amazon, 2023) Together with AWS Lambda, API Gateway forms the app-facing part of the AWS serverless infrastructure. API Gateway handles all the tasks involved in traffic management, authorization and access control, throttling, monitoring, and API version management. API Gateway provides a pricing model based on API requests; there are no minimum fees or startup costs. Security controls provided by API Gateway include AWS Identity and Access Management and Lambda authorizer functions. Amazon Cognito, OAuth2, and native OIDC are also supported. API Gateway creates RESTful APIs that are HTTP-based, enable stateless client-server communication, and implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE. API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, provide stateful, full-duplex communication between client and server, and route incoming messages based on message content. API Gateway supports containerized and serverless workloads, as well as web applications. AWS Lambda is an event-driven, serverless computing platform that runs your code in response to events. In our full-stack application, we created Lambda functions to handle the GET, POST, PUT, and DELETE operations we need to interact with our database.
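The sketch below shows the general shape of such a function: a Lambda handler invoked by API Gateway that performs a GET against a database table and returns an HTTP response. The table name, path parameter, and response format are assumptions for illustration, not the project's actual code.

```typescript
// Sketch of a Lambda handler behind API Gateway serving a GET from DynamoDB.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ message: "Missing id" }) };
  }

  // Look up a single item by its primary key.
  const result = await docClient.send(
    new GetCommand({ TableName: "Trips", Key: { id } }) // hypothetical table
  );

  return {
    statusCode: result.Item ? 200 : 404,
    // CORS header so a front end on another origin can read the response.
    headers: { "Access-Control-Allow-Origin": "*" },
    body: JSON.stringify(result.Item ?? { message: "Not found" }),
  };
};
```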
[Audio] MongoDB and DynamoDB are two popular high-performance NoSQL database solutions with major differences in design and applicable use cases. (DeBrie, 2020) (IBM, 2023) (Amazon 2023) Amazon DynamoDB is a serverless database service available only on Amazon AWS. It is typically used with a single-table design, storing data records as a table of items where each item is a collection of attributes. Primary keys are used to uniquely identify each item. The primary reason for a single-table design in DynamoDB is that it allows retrieval of multiple, heterogeneous item types with a single request, which reduces the number of requests for a particular access pattern. DynamoDB is fully managed by Amazon Web Services, including provisioning, configuration, replication, and scaling. DynamoDB offers encryption at rest and on-demand backups for sensitive data protection. DynamoDB was designed for online transaction processing, where high-speed, high-velocity data access is required and only a few records are processed at a time. It is ideal for serverless applications using AWS Lambda that require low-latency data access and where data analytics is not a primary concern. MongoDB is an open-source, general-purpose NoSQL database that can be implemented as a stand-alone server, a local or cloud-hosted containerized application, or a fully managed service through MongoDB Atlas. It is available for macOS, Windows, and most Linux distributions, and provides drivers to interface with most popular programming languages. For stand-alone installations, database management, security, scaling, and implementation are the owner's responsibility. In MongoDB, each data record is a JSON-like document consisting of key-value pairs, and each document is assigned a unique ID upon creation. MongoDB is ideal for applications that require flexibility in creating data records, aggregation of record queries, and analysis of large data sets.
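To illustrate the difference in how the two databases are accessed, the following sketch performs the same "fetch one record by id" operation against each. The connection string, database, collection, and table names are hypothetical.

```typescript
// Side-by-side sketch: fetch one record by id from MongoDB and from DynamoDB.
import { MongoClient, ObjectId } from "mongodb";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

// MongoDB: connect to a server (local, containerized, or Atlas) and query a
// collection of JSON-like documents by its auto-assigned ObjectId.
export async function getTripMongo(id: string) {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  try {
    return await client
      .db("travel")                 // hypothetical database
      .collection("trips")          // hypothetical collection
      .findOne({ _id: new ObjectId(id) });
  } finally {
    await client.close();
  }
}

// DynamoDB: no server to manage; request an item from a table by primary key
// through the AWS SDK.
const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function getTripDynamo(id: string) {
  const result = await docClient.send(
    new GetCommand({ TableName: "Trips", Key: { id } }) // hypothetical table
  );
  return result.Item;
}
```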
[Audio] Full-stack application deployment. For our full-stack application deployment, we created the Lambda functions necessary to query our database, then created the APIs needed to connect our Lambda functions to our database through Amazon API Gateway. We then created and ran test scripts to ensure the Lambdas operated as expected. Integrating our front-end Angular application with the back-end API required manually enabling Cross-Origin Resource Sharing (CORS), which meant adding an OPTIONS method within our API structure. Once this was completed, we only needed to modify a few lines in our Angular application to have a full-stack web application deployed on Amazon AWS.
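The kind of change needed in the Angular application might look something like the following sketch, where the data service's base URL is pointed at the API Gateway invoke URL instead of the local backend container. The service name and URL below are placeholders, not the project's actual values.

```typescript
// Sketch of an Angular data service after switching to the API Gateway endpoint.
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { Observable } from "rxjs";

// Previously something like: const apiBaseUrl = "http://localhost:3000/api";
const apiBaseUrl = "https://abc123.execute-api.us-east-1.amazonaws.com/prod"; // placeholder

@Injectable({ providedIn: "root" })
export class TripDataService {
  constructor(private http: HttpClient) {}

  // The browser sends a CORS preflight (OPTIONS) before this GET, which is why
  // the OPTIONS method had to be added to the API in API Gateway.
  getTrips(): Observable<any[]> {
    return this.http.get<any[]>(`${apiBaseUrl}/trips`);
  }
}
```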
[Audio] Cloud-Based Development Principles. Two computing principles that differ between serverless computing and traditional server architectures are elasticity and services billing. With Amazon API Gateway, Lambda, and S3, applications scale automatically without the need to provision and decommission services as load conditions change, and services are billed only for the computing resources actually used. With traditional server architectures, capacity must be planned and deployed ahead of time, and services are billed for system up-time regardless of use. Traditional servers have a much greater chance of being overloaded, causing service delays or interruptions, or, if underutilized, they may sit idle, wasting resources.
[Audio] Securing your cloud-hosted application. Security on AWS is shared between AWS and the customer. AWS is responsible for the security of the infrastructure, including hardware, software, networking, and facilities. The customer is responsible for the security of their data and, depending on the services used, any infrastructure that is not managed by AWS. Access management, permissions, and policies. Identity and Access Management (IAM) provides tools for identity management, for managing both human and machine identities, and permissions management, for controlling what those identities have access to and what they can do. Human and machine identities are required in order to access AWS resources, whether through the web, an application, or command-line tools. IAM only works with AWS services that are integrated with IAM. The first step in access management is defining access requirements: who needs access, inside and outside the organization, and from where, internally, externally, or publicly. Every AWS account begins with a root account; two-factor authentication should immediately be set up for this account, along with recovery options, and the root account should only be used for tasks that require it. Additional accounts should be set up as IAM users and given the least privileges and access to only what is required. IAM roles, similar to IAM users, define permission policies, but they are not uniquely associated with one person or group and can be assigned and revoked as needed. API security. APIs should be secured using IAM tools and should employ AWS data encryption solutions and logging services. Additionally, any communication should be encrypted with SSL/TLS, and validated cryptographic modules should be used when accessing AWS through the command line or other APIs.
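As an example of the least-privilege idea, here is a sketch of an IAM policy document, expressed as a TypeScript constant, that a Lambda execution role might use to allow only the DynamoDB actions the function needs on a single table. The account ID, region, and table name are placeholders.

```typescript
// Sketch of a least-privilege IAM policy for a Lambda execution role: it grants
// only specific DynamoDB actions on one specific table (placeholder ARN).
const tripsTablePolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Scan",
      ],
      Resource: "arn:aws:dynamodb:us-east-1:123456789012:table/Trips",
    },
  ],
};

export default tripsTablePolicy;
```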
[Audio] There are many options when it comes to cloud computing. You can lease physical servers, deploy virtual machines, deploy containerized applications, or deploy your application code in a serverless environment, and you can also use hybrid combinations of these. Containerization is an easy and effective way to deploy applications and relieves you of the difficulties related to platform dependence. Serverless computing provides the option to deploy your code without having to manage servers or network infrastructure, or worry about scaling or idle resources. What they all have in common is the necessity of securing both the data they process and the infrastructure they operate on. Migrating our Angular application from a static website to a containerized application with Docker Compose, and then to a serverless architecture on Amazon AWS, provided a great experience in deploying a full-stack application in the cloud. That is all we have for today. Thank you for watching.