
My Next'22 Adventure

Senior year me, at Google Cloud Next'22; my classes, an absentee.


As an organizer for GDG Cloud Twin Cities, I have had the privilege of hosting several study jams and even a Road to Certification workshop series in collaboration with GDG Lawrence. Among the perks of the Google Developers program (occasional swag included), I learned about the amazing opportunity of attending Google Cloud Next'22.


Hesitant about whether I was "enough", I didn't think I deserved the opportunity - I am a mere senior at Western Michigan University, with just a few certifications in GCP, just that much applicable experience in the cloud, and just...


Just an acquaintance to imposter syndrome.


But thanks to that random 2:00 AM surge of discontent with my own lack of confidence, I booked my flight tickets and embraced a sudden trip to the Bay.


Day of: Google Cloud Next'22 kicked off with a funky, 'motivational techno', stock-footage-esque intro video before transitioning into the video on the Top 10 Cloud Technology Predictions.


Top 10 Cloud Predictions @ Cloud Next'22 © Elaine Yun Ru Chan, 2022


Ask someone which technologies to look forward to and you might hear answers like Artificial Intelligence, Machine Learning, and so on. Then again, all of that boils down to the Cloud. So we know the Cloud is big - it's personally one of the most exciting things for me, hence why my eyes sparkled as the speakers presented their top cloud predictions. Here are some of my favorites...



Opening Keynote


Curated open source to provide a layer of accountability. After my exposure to secure development, I learned about vulnerable packages and what a pain in the x they are. The goal here is to give developers more support, especially in finding vulnerabilities and automating the remediation processes that update them. The inspiration arguably traces back to FedRAMP (the Federal Risk and Authorization Management Program: https://www.fedramp.gov/), which applies a layer of assessment to all phases of the software delivery pipeline, from preparation to authorization, on top of continuous monitoring. Drawing on that model, Google Cloud introduced the Software Delivery Shield - a fully managed software supply chain security solution that adds protections at each phase of the pipeline (https://cloud.google.com/software-supply-chain-security/docs/sds/overview).


As I mentioned, I had only just discovered all the unpleasantries that come with vulnerable packages in a code base. I had recently been exposed to Snyk (https://snyk.io/), a scanner that looks for vulnerabilities in your code, open source dependencies, etc., and provides an automatic fix. The catch with this after-the-fact security is having to revert to the development stage - that is, to refactor the code base - which ultimately delays the delivery timeline. As a novice to the secure software delivery pipeline, I had only just learned the importance of provisioning a security layer at each phase. So hearing about these new features was an "Aha!" moment - "Hey, I just learned about that" - and I'm glad to see these products unfold before my eyes.
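To make the idea concrete, here's a toy sketch of what a dependency scanner of this kind does conceptually: compare pinned dependency versions against an advisory feed and suggest fixed versions. The package names, versions, and advisory data below are all invented for illustration - this is not Snyk's actual behavior or API.

```python
# Toy dependency scan: flag pinned versions at or below a known-vulnerable
# version and suggest the fixed release. Advisory data is made up.

ADVISORIES = {
    # package: (max_vulnerable_version, fixed_version)
    "libfoo": ((1, 2, 3), "1.2.4"),
    "libbar": ((0, 9, 0), "1.0.0"),
}

def parse_version(v: str) -> tuple:
    """Turn '1.2.3' into (1, 2, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def scan(dependencies: dict) -> list:
    """Return (package, installed, suggested_fix) for each vulnerable pin."""
    findings = []
    for pkg, installed in dependencies.items():
        advisory = ADVISORIES.get(pkg)
        if advisory and parse_version(installed) <= advisory[0]:
            findings.append((pkg, installed, advisory[1]))
    return findings

deps = {"libfoo": "1.2.0", "libbar": "1.0.0", "libbaz": "2.0.0"}
for pkg, installed, fix in scan(deps):
    print(f"{pkg} {installed} is vulnerable; upgrade to {fix}")
```

Real scanners do far more (transitive dependency graphs, reachability analysis, automated fix PRs), but the core loop is this kind of version-vs-advisory comparison.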


Enhancement for automated security operations. We were introduced to a new methodology, CD/CR - Continuous Detection and Continuous Response - an interplay between data visibility and security analysis on one side, and response orchestration and continuous feedback on the other. This session was fairly new to me, so I was bound to end up Googling these terms and technologies, specifically the Chronicle service:


Chronicle: a cloud service, built as a specialized layer on top of core Google infrastructure, designed for enterprises to privately retain, analyze, and search the massive amounts of security and network telemetry they generate. It consists of two families of operations - SIEM (Security Information and Event Management) and SOAR (Security Orchestration and Automated Response).

  • How it works:

    • Collection - data ingestion

    • Detection - OOTB (Out of the box) detections and threat intelligence

    • Investigation - data is surfaced through case management, sub-second search, collaboration and contextual mapping

    • Response - automated playbooks, incident management, and closed-loop feedback

  • Resources


All of which boils down to the new feature being spotlighted - CSA (Community Security Analytics): an open-source set of foundational security analytics designed to give organizations a rich baseline of pre-built queries and rules they can readily use to start analyzing their Google Cloud logs - including Cloud Audit logs, VPC Flow logs, DNS logs, and more - using cloud-native or third-party analytics tools. Just a personal opinion: Open Source anything holds a big place in my heart, let alone a security tool. Here's how it works:

  1. In your Chronicle instance, our interest lies in the Rules Editor, where you can edit existing rules and create new ones

  2. Pick a YARA-L rule that aligns with your use case (https://github.com/GoogleCloudPlatform/security-analytics#security-analytics-use-cases)

  3. Re-deploy your instance
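To give a feel for what such a detection expresses, here's a toy Python analogy (deliberately not YARA-L syntax) of a classic rule: flag any source with repeated failed logins inside a time window. The event data, threshold, and window are invented for the example.

```python
# Toy SIEM-style detection: flag source IPs that produce `threshold`
# failed logins within `window_secs`. Chronicle rules express this kind
# of logic declaratively in YARA-L; this is just a Python illustration.
from collections import defaultdict

def detect_brute_force(events, threshold=3, window_secs=60):
    """events: iterable of (timestamp_secs, source_ip, outcome).
    Returns the set of source IPs matching the rule."""
    failures = defaultdict(list)
    for ts, ip, outcome in events:
        if outcome == "LOGIN_FAILURE":
            failures[ip].append(ts)
    flagged = set()
    for ip, times in failures.items():
        times.sort()
        # Slide a window of `threshold` consecutive failures.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window_secs:
                flagged.add(ip)
                break
    return flagged

events = [
    (0, "10.0.0.5", "LOGIN_FAILURE"),
    (10, "10.0.0.5", "LOGIN_FAILURE"),
    (20, "10.0.0.5", "LOGIN_FAILURE"),
    (5, "10.0.0.9", "LOGIN_FAILURE"),
    (500, "10.0.0.9", "LOGIN_FAILURE"),
]
print(detect_brute_force(events))  # {'10.0.0.5'}
```

The CSA repository linked above ships the real, production-grade versions of rules in this spirit, ready to drop into the Rules Editor.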

Resources:



Breakout Sessions


How to build next-level web applications with Cloud Run - Cloud Run lets you build and deploy scalable containerized apps, written in any language, on a fully managed platform. If you'd rather develop applications than hack your way through managing their infrastructure, Cloud Run is one of your best bets. Here's a great blog post comparing GCP compute services: https://cloud.google.com/blog/topics/developers-practitioners/where-should-i-run-my-stuff-choosing-google-cloud-compute-option. All in all, Cloud Run boasts a simple, automated service with a focus on developer velocity, and this session was all about that - Google Cloud's commitment to continuously enhancing the Cloud Run developer experience.
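For context, the "any language" contract mostly boils down to one thing: your container must listen on the port Cloud Run passes in the PORT environment variable (8080 by convention). A minimal stdlib-only Python sketch of such a service - the greeting text and the env-var guard are my own choices for illustration, not an official sample:

```python
# Minimal containerizable HTTP service in the shape Cloud Run expects:
# serve plain HTTP on the port from the PORT environment variable.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from Cloud Run!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server(host: str, port: int) -> HTTPServer:
    return HTTPServer((host, port), Handler)

# Guarded behind RUN_SERVER so importing this module never blocks;
# in a real container you would simply call serve_forever() here.
if __name__ == "__main__" and os.environ.get("RUN_SERVER"):
    port = int(os.environ.get("PORT", "8080"))
    make_server("0.0.0.0", port).serve_forever()
```

Package it with any base image, `gcloud run deploy`, and the platform handles scaling, TLS, and the rest of the infrastructure.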


The biggest push to prod would be the Integrations feature, currently in its Preview stage. It automatically integrates other services with your Cloud Run application with far less configuration hassle, namely:

  • Redis - Google Cloud Memorystore

  • Custom Domains - Google Cloud Load Balancing


Starting with the Redis Memorystore, the current procedure requires the additional step of setting up a Serverless VPC Access connector to talk to the Redis instance (https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-cloud-run). With the new Integrations feature, here's the expedited version of connecting Cloud Run to Redis:

  1. Under the Integrations tab, select Redis - Google Cloud Memorystore.

  2. Configure the name and cache size when necessary.

  3. Enable the listed APIs when prompted.

  4. Submit 🎉


As for the Load Balancer integration, it removes the barrier of manually setting up the configurations needed to attach a load balancer to your app. The current procedure includes, but is not limited to, reserving an external IP address, creating an SSL certificate resource, creating the load balancer, and connecting the domain to it (https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless). Now, let's look at the Integrations version of attaching a load balancer to your Cloud Run instance:

  1. Under the Integrations tab, select Custom Domains - Google Cloud Load Balancing.

  2. Enter the domain and domain path.

  3. Specify the name of the service.

  4. [Optional] Add other domains by clicking Add Item and repeat steps 2 - 3.

  5. Enable the listed APIs when prompted.

  6. Submit 🎉


Admittedly, this feature is still in its infancy, hence the limitations in place. Through it, though, I discovered the meaning of the Developer Community. At the end of the session there was a wave of raised hands and follow-up questions, and suddenly a contact list was being curated from the audience - Developer Advocates, Customers, and more - for a follow-up session to explore ways to further improve the feature. Heartwarming, really, to see everyone come together to discuss its future.


Resources:


How to get involved in open source: a quickstart guide - If there's one thing that's been bugging me, it's Open Source. There was always this talk of contributing to Open Source, yada yada... But what does contributing to Open Source even mean?


I wasn't too much of a stranger to Open Source Software - I saw it heavily utilized in the workspaces around me. But I never believed there would be a place for me as a contributor. Here's why:

  • My first language was English, not Assembly.

  • I don't natively code in Vim because I find it more intuitive.

  • I don't answer Stack Overflow questions for fun.

But you get what I mean - I didn't think I was a good enough developer to contribute to Open Source. Tracy, the speaker of the session, assured us otherwise.


Let's start off by looking for a tool to contribute to.

The session shared a list of platforms that compile open source software you can contribute to, mostly hosted on GitHub. The last of them was This Dot Labs, the company our speaker, Tracy, co-founded - a group of open source contributors who believe in mentoring and building the next gen of devs.


After scrolling through the infinite pool of tools, it's time to think about how you'd like to contribute. Say you already have a feature in mind that you'd like to add: the rule of thumb is to create an Issue and ask to be assigned to it, or alternatively to reach out to the tool's maintainers directly and discuss further. But say you don't know what to work on - look through the tool's existing Issues. Issue tags are usually quite descriptive of the category, for example Good First Issue for beginner-friendly tasks. After deciding which Issue works best for you, reach out to a maintainer to be assigned to it. On a side note, if you don't feel ready to contribute code just yet, there's always documentation. As a developer, I know the struggle of having to document my programs, especially as Wiki pages. As Tracy mentioned, you'd be contributing just as much as anyone else, even if it's just typo-fixing, guideline-drafting and so on.


All in all, where there is an open source tool, there is almost always a communications platform for reaching maintainers and/or other contributors - that should always be your go-to for reaching your fellow peers in this field. Just keep in mind that maintainers are also human: treat them with respect and they'll surely get back to you shortly.


Personally, I haven't yet found an open source tool that's my jam to contribute to, but I'm thankful nonetheless to have sat in on this breakout session on Open Source. It helped shrink my fear of the topic, and it was simply interesting to learn more about others' journeys into the space, conventional or not.


Automation for acceleration: Best practices for Edge, Computer vision and security - The room was entirely packed, and it didn't help that I, a 5'2" short-sighted newbie, had voluntarily sat toward the back row.


We started off the session with an insight into using the Edge as a solution for bandwidth. The example scenario was a grocery store monitoring system that uses Computer Vision to track the store's stock in real time: if the store runs low on a certain product, the store manager is notified of the shortage and prompted to restock. However, there's no denying how expensive it would be to send all of that logging to the cloud. Thus the solution: store the logging at the Edge, and send only actionable items to the cloud.
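The edge-side filtering idea can be sketched in a few lines: summarize local CV observations and forward only the actionable "restock" events upstream. The product names and the restock threshold below are invented for illustration.

```python
# Edge-filtering sketch: the local device sees every observation, but
# only shortage alerts (the actionable items) are sent to the cloud.

RESTOCK_THRESHOLD = 5  # hypothetical "low stock" cutoff

def summarize_at_edge(observations):
    """observations: list of (product, units_seen) from a local CV model.
    Returns only the actionable alerts worth sending upstream."""
    alerts = []
    for product, units in observations:
        if units < RESTOCK_THRESHOLD:
            alerts.append({"product": product, "units": units, "action": "restock"})
    return alerts

frames = [("cereal", 12), ("milk", 2), ("eggs", 4)]
print(summarize_at_edge(frames))
# Two alerts leave the store; the cereal observation never costs bandwidth.
```

The bandwidth win is exactly this ratio: every frame is processed locally, but only the handful of shortage events cross the network.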


On the topic of logging, we explored various automation opportunities and how Edge computing comes into play, namely:

  • Deployment: Automating app and compute deployment is key to success

  • Monitoring: Utilization is more important because of the increased cost of compute at the Edge

  • Alerting: With more static infrastructure, knowing when uptime is affected

  • Configuration: Centrally-controlled, declarative Edge compute configuration

  • SC3: Secure Software Supply Chain even more important on the Edge

  • Logging: While logging is crucial, smart logging/event logging reduces the required bandwidth


Details on Automation Acceleration Opportunities © Elaine Yun Ru Chan 2022


Furthermore, we looked over the Security aspect of the supply chain and how attack vectors surface at different phases, from the development side through build, deploy, and prod. We also walked through a few real-world scenarios, which was one of my favorite discussions of the day - there was a lot of curiosity about the source of these attacks and the remediation processes that followed. With each new case scenario, the attackers' methodologies grew more worrisome. As Iman Ghanizada, Global Head of Autonomic Security Operations at Google, put it, "Every day on the news, we hear about a new 18-year-old that has breached a company." Wouldn't you also think the worst decision a company could ever make is to lay off its entire security team?


This was one of the most wholesome sessions for me, not because I scribbled a bunch of technical terms accompanied by question marks across several pages of my notebook, but because of how thought-provoking it was. The engagement between audience and speaker was thoroughly enjoyable, from learning how Google provides services at its Security Operations layer to how companies can benefit from these fully-managed security layers.



Others


A big part of Cloud Next'22 was also the highly-interactive booths. On the second floor of the Events Center, booths covering all dimensions of GCP (Security, Build, Storage, and so on) were placed all over, each stationed with an expert speaker. I managed to interact and get a higher-level overview of all the new features across these different aspects of GCP.


Furthermore, there were computers placed in every nook and corner urging attendees to take on the #GoogleClout challenge, whereby you're given a challenge scenario in a specific domain, e.g. Build, and attempt to complete it within the time limit (learn more about the program here: https://cloud.google.com/blog/topics/training-certifications/complete-the-challenge-and-build-cloud-skills). Besides that, a hands-on interactive Cloud Skills Boost corner was set up for attendees to take on labs curated specifically for Cloud Next'22 - successfully completing any of them earned you a monthly subscription to the platform. There was also a booth catering to the DRL Fly Cup challenge, which, I admit, was such a sick setup, from full-sized game consoles to a display of various competitive drones (learn more about the program here: https://thedroneracingleague.com/googlecloud/).


Lastly, all attendees were given a bunch of swag, merchandise and all. My favorite would definitely be the voucher for a certification exam, which brings me to my next adventure: a new cloud certification (?)



End


In the end, we were invited to watch the DRL competition at PayPal Park, but I had already booked my transportation back to my hotel in preparation for the long haul home the next day.


My biggest regret was that I shied away from being interviewed about my personal top cloud predictions, so I'll post them here instead:

  • Accessibility to integration with third-party apps: e.g. being able to directly create a Splunk dashboard on vulnerable GKE clusters

  • Automated error-response and recommendations: e.g. being recommended possible solutions for why your App Engine deployment failed - 'Did you specify the `app deploy` command in your Dockerfile?'


But as you can tell, my social energy was drained. However, I would do it all over again nonetheless. Because today, I met tomorrow.




