##### An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

# An introduction to Linux through Windows Subsystem for Linux

## Introduction and motivation

So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online, so Google if in doubt.
• Dual-booting with Windows and a Linux distro
• Will basically involve partitioning your drive and installing Linux from an external bootable USB through your computer's boot menu. You'll get the full Linux experience.
• Lots of Linux flavors to choose from. For beginners, Ubuntu and Linux Mint are generally recommended. I have Ubuntu 18.04 LTS; I'd recommend Ubuntu 20.04 LTS since it's newer, but it's all up to you.
• However, it can be a pain to constantly be switching between operating systems. Maybe you wanna make the full jump to Linux, maybe you don't.
• Life pro tip: if you go down this route, disable Windows 10's Fast Startup feature as it will get very screwy with a dual-boot. I've also included a helpful guide in Appendix B.
• Using a virtual machine (VM) to run Linux
• Devours RAM and is generally pretty slow, would not recommend.
• Using terminal emulators
• These provide commands and functionality similar to a Linux terminal, but are still running on Windows architecture.
• These days, the most commonly used Linux shell is called bash. bash stands for Bourne Again Shell (no, Bourne is not a typo), and is likely what you'll be using as well.
• Terminal emulators generally don't include a package manager, i.e. you can't download new bash programs, so pretty limited for general usage. BUT you can install a package manager externally, kind of hacky but can work.
• Examples of terminal emulators include PuTTY, Git Bash, msys2 and mingw.
• Using Windows Subsystem for Linux (either WSL 1 or WSL 2)
• WSL provides a compatibility layer for running GNU/Linux programs natively on Windows 10. It has integration features with certain Windows 10 development apps (notably Visual Studio Code) as well.
• You've got two options, WSL 1 and WSL 2. WSL 2 was recently released and features a real Linux kernel, as opposed to the simulated kernel in WSL 1. This means WSL 2 offers significant performance advantages, but it still lacks some of WSL 1's features.
• WSL 1 is what I currently use, and thus what I'll be talking about in this guide. I'm not necessarily recommending it, frankly I regret not doing a dual-boot sooner and ditching Windows, but a dual-boot isn't for everyone and takes a lot of time you might not have right now.
• Getting WSL initially set up is easy, but making it run smoothly requires some effort, and some features (like audio playback or displaying GUIs) require workarounds you can research if interested. WSL will also not work properly with low-level system tools.
With that out of the way, let's get started with setting up WSL 1 on your Windows 10 machine.

## Setting up WSL

So if you've read this far, I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here; I'll be covering what they talk about and diving into some other stuff.

### 1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows 10's Fast Startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

### 2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle two different username/password combinations, but it's up to you. Once you're logged in, update your packages by running
sudo apt-get update
And then
sudo apt-get upgrade
apt-get is Ubuntu's package manager; it's what you'll be using to install additional programs on WSL.
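To give a feel for day-to-day apt-get usage, here's a minimal sketch; build-essential is just an illustrative package (compilers and make), not something this guide requires:

```bash
sudo apt-get update                    # refresh the list of available packages
sudo apt-get upgrade                   # upgrade everything that's already installed
sudo apt-get install build-essential   # install a new package (illustrative example)
apt-cache search markdown              # search for packages by keyword
```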

### 3. Making things nice and crispy: an introduction to UNIX-based filesystems

The two steps above are technically all you need to run WSL on your system. However, you may notice that whenever you open up the Ubuntu app, your current folder seems to be completely random. If you type in pwd (for Print Working Directory; 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. You probably won't get any output, because surprise surprise: this folder is not your Windows home folder and is in fact empty. (Okay, it's not actually empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files, but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways.)
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux, after all. Notice how, when you typed pwd earlier, the address you got was /home/. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if you type ls /, you'll see the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/. When we open up Ubuntu, we don't want it tossing us in this random /home/ directory, we want our Windows home folder. Let's change that!
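Before we change anything, here's roughly what that whole exploration looks like in one go (the exact output will differ on your machine):

```bash
pwd                # print the current working directory, e.g. /home/<your-username>
ls -a              # list everything here, including hidden dotfiles
ls /               # top-level directories: bin, etc, home, mnt, usr, ...
ls /mnt/c          # your Windows C drive, mounted inside WSL
ls /mnt/c/Users    # your familiar Windows user folders
```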

### 4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu password. sudo is a command that gives you root privileges in bash (akin to right-clicking in Windows and selecting 'Run as administrator'). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely though, and some people have insane vim setups. Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info that is read every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/ by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash
Next, press esc to exit insert mode, then type in the following:
:wq
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Windows home folder! However, things seem to be lacking their usual color...

### 5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home//. ~
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the . at the end is cp-specific syntax that lets it copy anything, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!
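If you want to double-check that the copy worked, a quick sanity check looks like this:

```bash
ls -a ~             # you should now see .bashrc, .profile, etc. in your new home folder
source ~/.bashrc    # reload your bash configuration without opening a new shell
```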

### 6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent and you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.
export DISPLAY=:0: if you install an X server like VcXsrv (XLaunch), this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.
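Put together, the lines added to the top of ~/.profile would look something like this (the DISPLAY line only does anything if you've set up an X server such as VcXsrv):

```bash
# ~/.profile additions
alias rm='rm -i'     # always ask for confirmation before deleting files
export DISPLAY=:0    # point GUI programs at the local X server (VcXsrv/XLaunch)
```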

## Appendix A: brief intro to top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
• bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
• boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
• dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
• etc: no idea why it's called etc, but it contains system-wide configuration files
• home: equivalent to Windows's C:\Users folder, contains home folders for the different users. In an Ubuntu system, under /home/ you'd find the Documents folder, Downloads folder, etc.
• lib: libraries used by the system
• lib64: 64-bit libraries used by the system
• mnt: mount, where your drives are located
• opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
• proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
• run: directory for programs to store runtime information.
• srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
• sys: system, provides information about different I/O devices to the Linux kernel. If dev files allow you to access I/O devices, sys files tell you information about these devices.
• tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
• usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
• var: variable, contains variable data such as logs, databases, and e-mail, that persists across different boots.
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.

## Appendix B: random resources

EDIT: implemented various changes suggested in the comments. Thanks all!

##### Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

## Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual made for Windows users. If you want to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers running on your operating system. Be warned, however, that there are some system requirements necessary to run the CodeReady Containers we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual on Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Updating OpenShift
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
## What is the OpenShift Container Platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces; these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment and management. It does this by streamlining and automating container management.
## What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are entered in the command-line interface, it is necessary to know how it works and how you can browse through files and folders. If you either don't have this basic knowledge or have trouble with the basic command-line interface commands in PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming given the sheer number of commands.
MacOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know, just to make the use of OpenShift a bit simpler. This consists of some general knowledge of PaaS and of related technologies like Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

# System requirements

## Minimum System requirements

Red Hat OpenShift CodeReady Containers has the following minimum hardware requirements:
Hardware requirements
CodeReady Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD) virtualization support; this has to be enabled in the BIOS
Software requirements
Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

## Required additional software packages for Linux

On Linux, CodeReady Containers requires the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
| Linux distribution | Installation command |
| --- | --- |
| Fedora | sudo dnf install NetworkManager |
| Red Hat Enterprise Linux/CentOS | su -c 'yum install NetworkManager' |
| Debian/Ubuntu | sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager |

# Installation

## Getting started with the installation

To install CodeReady Containers, a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on "https://www.openshift.com/", where you need to press Login and then select the option "Create one now".
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from "https://cloud.redhat.com/openshift/install/crc/installer-provisioned". Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later. The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be entered in this command line interface unless stated otherwise. To be able to run the commands, use the command line interface to go to the location in your $PATH where you extracted the CodeReady zip.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should show you the version that is currently installed.
C:\Users\[username]\$PATH>crc version
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup

## Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run CodeReady Containers and will create the ~/.crc directory if it did not previously exist. During this process you have to supply your pull secret; once the process is completed you have to reboot your system. When the system has restarted, you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and the OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you already have a CodeReady Containers virtual machine and want to make configuration changes, you need to delete it with the $crc delete command, then create and start a new virtual machine with the changed configuration. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers, so to prevent data loss we recommend you save any data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup
Before starting the machine, keep in mind that it is not possible to change the virtual machine's configuration afterwards. For this tutorial it is not necessary to change the configuration, so if you don't want to make any changes, please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start

Note: it is possible that you will get a Nameserver error later on; if this is the case, please start it with crc start -n 1.1.1.1

# Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those that wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

## Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand before it is able to configure anything. The available subcommands for this binary and virtual machine are:
● get, this command allows you to see the value of a configurable property
● set/unset, this command can be used for two things: to display the names of, or to set and/or unset the values of, several options and parameters. These parameters being:
○ Shell options
○ Shell attributes
○ Positional parameters
● view, this command starts the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration. There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this potential issue, you can set the value of a property that starts with skip-check or warn-check to true, to skip the check or get a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get
C:\Users\[username]\$PATH>crc config set
C:\Users\[username]\$PATH>crc config unset
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help

## Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use the $crc config set CPUs command. Keep in mind that the default number of CPUs is 4, and the number of vCPUs you wish to assign must be equal to or greater than the default value. To increase the memory available to the virtual machine, use the $crc config set memory command. Keep in mind that the default amount of memory is 9216 mebibytes, and the amount of memory you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs
C:\Users\[username]\$PATH>crc config set memory
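As an illustrative sketch (the values below are arbitrary, and recent crc releases use lowercase property names such as cpus and memory; the commands are the same in PowerShell and other shells):

```bash
crc config set cpus 6        # give the VM 6 vCPUs instead of the default 4
crc config set memory 12288  # give the VM 12 GiB instead of the default 9216 MiB
crc config get cpus          # confirm the new value
```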

## Configuring the DNS

### Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers; these are:
● crc.testing, this is the domain for the core OpenShift services.
● apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks will run to verify the configuration.

### macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires an entry for api.crc.testing in /etc/hosts, pointing at the VM IP address, in order to function properly.

### Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux. Here, CodeReady Containers expects NetworkManager to manage networking, and NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following (see the sketch after this list):
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
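Assuming the file layout described above, the configuration file itself would contain something like this:

```bash
# /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf (sketch based on the values above)
server=/crc.testing/192.168.130.11
server=/apps-crc.testing/192.168.130.11
```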

# Accessing the OpenShift Cluster

## Accessing the OpenShift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as the developer user with the credentials provided in the output of the crc start command. It is also possible to view the passwords for the kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through both the kubeadmin and developer users, note that the kubeadmin user should only be used for administrative tasks such as user management, and the developer user for creating projects or OpenShift applications and deploying those applications.
C:\Users\[username]\$PATH>crc console
C:\Users\[username]\$PATH>crc console --credentials

## Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression

Note: this has to be executed every time you start a new shell; a solution is to move the oc binary to the same path as the crc binary.

To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly.
C:\Users\[username]\$PATH>.\oc
Step 3
Now you need to log in as the developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that $crc start will provide you with the password that is needed to log in as the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443

Step 4.
oc can now be used to interact with your OpenShift cluster. If you, for instance, want to verify that the OpenShift cluster Operators are available, you can execute the command:
$oc get co
Keep in mind that by default CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
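A few read-only oc commands you might run at this point to look around the cluster (all standard oc subcommands; output will vary):

```bash
oc get co          # list the cluster Operators and their status
oc get pods -A     # list pods across all namespaces
oc status          # summary of the current project
oc whoami          # confirm which user you are logged in as
```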

## Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

### Creating the application

To create an application from the previously imported image, go back to the console and open the Topology view. From there, select Container Image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the Image option you'll want to select "Image stream tag from internal registry". Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creation process you should see the following, which means that the application is running successfully.

### Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling: vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and storage to a single instance, and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By pressing the up or down arrow, pods of the same application can be added or removed. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application: the more you scale it up, the more resources it will take up.

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94
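A minimal command-line sketch of the same idea, where "mediawiki" is a placeholder for whatever you named your deployment:

```bash
oc scale deployment/mediawiki --replicas=3   # run three pods instead of one
oc get pods                                  # watch the extra pods come up
```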

### Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the Pods within OpenShift can communicate with each other via the network and assigns each of them their own IP address. This makes all containers within a Pod behave as if they were on the same host. Giving each Pod its own IP address means Pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and/or configured. Two other options that might be interesting, but will not be demonstrated in this manual, are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation.
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
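For reference, roughly the same thing can be done with oc instead of the console; "mediawiki" is again a placeholder service name:

```bash
oc expose service mediawiki   # create a route for the service
oc get routes                 # shows the generated hostname under apps-crc.testing
```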
### Storage
OpenShift makes use of persistent storage, which relies on persistent volume claims (PVCs). PVCs allow the developer to request persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few reclaim policy options:
• Retain
• Recycle
• Delete
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and therefore you cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV; this can be done by executing the following command:
$oc delete
Step 2: Now you need to clean up the data on the associated storage asset.
Step 3: Now you can delete the associated storage asset, or, if you wish to reuse the same storage asset, you can create a new PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift. To do this you need to follow these steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv
This will give you a list of all the PVs in your cluster and will display the following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
In this example the reclaim policy will be changed to Retain.
$oc patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
In this example the reclaim policy will be changed to Recycle.
$oc patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
In this example the reclaim policy will be changed to Delete.
Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv
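As a concrete sketch of the commands above, with "pv0001" standing in for a name taken from your own oc get pv output:

```bash
oc patch pv pv0001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
oc get pv pv0001    # the RECLAIM POLICY column should now show Retain
```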

### Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check that you are logged in as Developer and click on "Monitoring". Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

## User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we're going to create a new user. The way this is done depends on the identity provider, specifically on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps are as follows.
$oc create user
Next up, we'll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity :
The first part is the name of the identity provider in the master configuration. For example, the following command creates an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping :
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki
Now we're going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding \
--clusterrole= --user=
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \
--clusterrole=cluster-admin --user=admin

## What did you achieve?

If you followed all the steps within this manual, you should now have a functioning Mediawiki application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you'll be able to set up your own Container Platform environment and host applications of your choosing.

# Troubleshooting

Nameserver
There is the possibility that your CodeReady container can't connect to the internet due to a Nameserver error. When this is encountered, a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
4. Click Add. The Select Users or Groups window opens.
5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
6. Click Apply, and then click OK.

# Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

# Sources

##### CLI & GUI v0.16.0.3 'Nitrogen Nebula' released!

This is the CLI & GUI v0.16.0.3 'Nitrogen Nebula' point release. This release predominantly features bug fixes and performance improvements.

# GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
75b198869a3a117b13b9a77b700afe5cee54fd86244e56cb59151d545adbbdfd monero-android-armv7-v0.16.0.3.tar.bz2
b48918a167b0961cdca524fad5117247239d7e21a047dac4fc863253510ccea1 monero-android-armv8-v0.16.0.3.tar.bz2
727a1b23fbf517bf2f1878f582b3f5ae5c35681fcd37bb2560f2e8ea204196f3 monero-freebsd-x64-v0.16.0.3.tar.bz2
6df98716bb251257c3aab3cf1ab2a0e5b958ecf25dcf2e058498783a20a84988 monero-linux-armv7-v0.16.0.3.tar.bz2
6849446764e2a8528d172246c6b385495ac60fffc8d73b44b05b796d5724a926 monero-linux-armv8-v0.16.0.3.tar.bz2
cb67ad0bec9a342b0f0be3f1fdb4a2c8d57a914be25fc62ad432494779448cc3 monero-linux-x64-v0.16.0.3.tar.bz2
49aa85bb59336db2de357800bc796e9b7d94224d9c3ebbcd205a8eb2f49c3f79 monero-linux-x86-v0.16.0.3.tar.bz2
16a5b7d8dcdaff7d760c14e8563dd9220b2e0499c6d0d88b3e6493601f24660d monero-mac-x64-v0.16.0.3.tar.bz2
5d52712827d29440d53d521852c6af179872c5719d05fa8551503d124dec1f48 monero-win-x64-v0.16.0.3.zip
ff094c5191b0253a557be5d6683fd99e1146bf4bcb99dc8824bd9a64f9293104 monero-win-x86-v0.16.0.3.zip
#
## GUI
50fe1d2dae31deb1ee542a5c2165fc6d6c04b9a13bcafde8a75f23f23671d484 monero-gui-install-win-x64-v0.16.0.3.exe
20c03ddb1c82e1bcb73339ef22f409e5850a54042005c6e97e42400f56ab2505 monero-gui-linux-x64-v0.16.0.3.tar.bz2
574a84148ee6af7119fda6b9e2859e8e9028fe8a8eec4dfdd196aeade47e9c90 monero-gui-mac-x64-v0.16.0.3.dmg
371cb4de2c9ccb5ed99b2622068b6aeea5bdfc7b9805340ea7eb92e7c17f2478 monero-gui-win-x64-v0.16.0.3.zip
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl81bL8ACgkQ8K9NRioL
35J+UA//bgY6Mhikh8Cji8i2bmGXEmGvvWMAHJiAtAG2lgW3BT9BHAFMfEpUP5rk
svFNsUY/Uurtzxwc/myTPWLzvXVMHzaWJ/EMKV9/C3xrDzQxRnl/+HRS38aT/D+N
gaDjchCfk05NHRIOWkO3+2Erpn3gYZ/VVacMo3KnXnQuMXvAkmT5vB7/3BoosOU+
B1Jg5vPZFCXyZmPiMQ/852Gxl5FWi0+zDptW0jrywaS471L8/ZnIzwfdLKgMO49p
Fek1WUUy9emnnv66oITYOclOKoC8IjeL4E1UHSdTnmysYK0If0thq5w7wIkElDaV
avtDlwqp+vtiwm2svXZ08rqakmvPw+uqlYKDSlH5lY9g0STl8v4F3/aIvvKs0bLr
My2F6q9QeUnCZWgtkUKsBy3WhqJsJ7hhyYd+y+sBFIQH3UVNv5k8XqMIXKsrVgmn
lRSolLmb1pivCEohIRXl4SgY9yzRnJT1OYHwgsNmEC5T9f019QjVPsDlGNwjqgqB
S+Theb+pQzjOhqBziBkRUJqJbQTezHoMIq0xTn9j4VsvRObYNtkuuBQJv1wPRW72
SPJ53BLS3WkeKycbJw3TO9r4BQDPoKetYTE6JctRaG3pSG9VC4pcs2vrXRWmLhVX
QUb0V9Kwl9unD5lnN17dXbaU3x9Dc2pF62ZAExgNYfuCV/pTJmc=
=bbBm
-----END PGP SIGNATURE-----
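As a rough sketch of what that verification looks like on Linux (hashes.txt is whatever filename you saved the signed message above under, and binaryFate.asc is the key file taken from utils/gpg_keys in the Monero source tree):

```bash
gpg --import binaryFate.asc                        # import binaryFate's public key (filename is an assumption)
gpg --verify hashes.txt                            # check the signature on the saved signed message
sha256sum monero-gui-linux-x64-v0.16.0.3.tar.bz2   # compare the output against the hash listed above
```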

Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear with the new binary.
In case you want to update manually, you ought to perform the following steps:
1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website. If you run active AV (AntiVirus) software, I'd recommend applying this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
2. Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users//Monero/ (Mac OS X), or home//Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.

You ought to perform the following steps:
1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website.
2. Extract the new binaries to a new directory of your liking.
3. Copy over the wallet files from the old directory (i.e. the v0.15.x.x or v0.16.0.x directory).
4. Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.16.0.3, it will simply pick up where it left off.

# Release notes (GUI)

• macOS app is now notarized by Apple
• CMake improvements
• Add support for IPv6 remote nodes
• Add command history to Logs page
• Add "Donate to Monero" button
• Indicate probability of finding a block on Mining page
• Minor bug fixes
Note that you can find a full change log here.

# Release notes (CLI)

• DoS fixes
• Add option to print daily coin emission and fees in monero-blockchain-stats
• Minor bug fixes
Note that you can find a full change log here.

# Further remarks

• A guide on pruning can be found here.
• Ledger Monero users, please be aware that version 1.6.0 of the Ledger Monero App is required in order to properly use CLI or GUI v0.16.

# Guides on how to get started (GUI)

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

# Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

# Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

# Guides to resolve common issues (GUI)

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

# Using the GUI with a remote node

In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to manually set a remote node, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html

# Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.


##### An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

# Getting Windows Subsystem for Linux running smoothly on Windows 10

## Introduction and motivation

So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online so Google if in doubt.
• Dual-booting with Windows and a Linux distro
• Will basically involve partitioning your drive and installing Linux from an external bootable USB through your computer's boot menu. You'll get the full Linux experience.
• Lots of Linux flavors to choose from. For beginners, Ubuntu and Linux Mint are generally recommended. I have Ubuntu 18.04 LTS, I'd recommend Ubuntu 20.04 LTS since it's newer, but it's all up to you.
• However, it's a pain to setup and to constantly be switching between operating systems, so up to you. Maybe you wanna make the full jump to Linux, maybe you don't.
• Life pro tip: if you go this route, disable Window 10's Fast Startup feature as it will get very screwy with a dual-boot.
• Using a virtual machine (VM) to run Linux
• Devours RAM and is generally pretty slow, would not recommend.
• Using bash emulation software, like Putty or Git Bash
• bash stands for Bourne Again Shell (no, Bourne is not a typo), it's the most commonly used Linux shell these days
• bash emulators generally don't include a package manager, i.e. you can't download new bash programs, so pretty limited for general usage. BUT you can install a package manager externally, kind of hacky but can work.
• Using Windows Subsystem for Linux (either WSL 1 or WSL 2)
• WSL provides a compatibility layer for running Linux natively on Windows 10. It has integration features certain Windows 10 development apps (notably Visual Studio Code) as well.
• You've got two options, WSL 1 and WSL 2. WSL 2 was recently released and features a real Linux kernel, as opposed to an simulated kernel in WSL. This means WSL 2 offers significant performance advantages, but still lacks some of WSL 1's features.
• WSL 1 is what I currently use, and thus what I'll be talking about in this guide. I'm not necessarily recommending it, frankly I regret not doing a dual-boot sooner and ditching Windows, but a dual-boot isn't for everyone and takes a lot of time you might not have right now.
• Getting WSL initially set up is easy, but making it run smoothly requires some effort, and some features (like audio playback or displaying GUIs) require workarounds you can research if interested.
With that out of the way, let's get started with setting up WSL 1 on your Windows 10 machine.

## Setting up WSL

So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here, I'll be covering what they talk about and diving into some other stuff.

### 1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows's fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

### 2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle two different username/password combinations, but it's up to you. Once you're in, type in
sudo apt-get update
And then
sudo apt-get upgrade
apt-get is the Ubuntu package manager; this is what you'll be using to install additional programs on WSL.

### 3. Making things nice and crispy: an introduction to UNIX-based filesystems

The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Present Working Directory; 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. You probably won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay, it's actually not empty, which we'll see in a bit: if you type in ls -a, a for All, you'll see other files, but notice they have a period in front of them, which tells bash that they should be hidden by default. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux, after all. Notice how, when you typed pwd earlier, the address you got was /home/. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/. When we open up Ubuntu, we don't want it tossing us in this random /home/ directory, we want our Windows home folder. Let's change that!

### 4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu's password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, kinda like an even crummier Notepad, which is a pain to use at first but bear with me and we can pull through. /etc/passwd is a plaintext file that does not store passwords, as the name would suggest, but rather stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/ by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash
Next, press esc to exit insert mode, then type in the following:
:wq
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Windows home folder! However, things seem to be lacking their usual color...

### 5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home//* ~
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the * is a Kleene Star and means "grab everything that's here", and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!

### 6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent and you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', your computer will die. You've been warned. Be careful.
export DISPLAY=:0 lets you open graphical interfaces through Ubuntu if you install an X server like VcXsrv (launched via XLaunch). The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.

## Appendix A: overview of top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
• bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
• boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
• dev: devices, contains information for Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
• etc: no idea why it's called etc, but it contains system-wide configuration files
• home: equivalent to Windows's C:/Users folder, contains home folders for the different users. In an Ubuntu system, under /home/ you'd find the Documents folder, Downloads folder, etc.
• lib: libraries used by the system
• lib64: 64-bit libraries used by the system
• mnt: mount, where your drives are located
• opt: third-party applications that don't have any dependencies outside the scope of their own package
• proc: process information, contains details about your Linux system, kind of like Windows's C:/Windows folder
• run: directory for programs to store runtime information. Similarly to /bin vs /usr/bin, run has the same function as /var/run, but gets loaded sooner in the boot process.
• srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
• sys: system, used by the Linux kernel to set or obtain information about the host system
• tmp: temporary, runtime files that are cleared out after every reboot. Kinda like RAM in that way.
• usr: contains additional UNIX commands, header files for compiling C programs, among other things. Most of everything you install using apt-get ends up here.
• var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.

## A glimpse into the future of Roblox

Our vision to bring the world together through play has never been more relevant than it is now. As our founder and CEO, David Baszucki (a.k.a. Builderman), mentioned in his keynote, more and more people are using Roblox to stay connected with their friends and loved ones. He hinted at a future where, with our automatic machine translation technology, Roblox will one day act as a universal translator, enabling people from different cultures and backgrounds to connect and learn from each other.
During his keynote, Builderman also elaborated upon our vision to build the Metaverse; the future of avatar creation on the platform (infinitely customizable avatars that allow any body, any clothing, and any animation to come together seamlessly); more personalized game discovery; and simulating large social gatherings (like concerts, graduations, conferences, etc.) with tens of thousands of participants all in one server. We’re still very early on in this journey, but if these past five months have shown us anything, it’s clear that there is a growing need for human co-experience platforms like Roblox that allow people to play, create, learn, work, and share experiences together in a safe, civil 3D immersive space.
Up next, our VP of Developer Relations, Matt Curtis (a.k.a. m4rrh3w), shared an update on all the things we’re doing to continue empowering developers to create innovative and exciting content through collaboration, support, and expertise. He also highlighted some of the impressive milestones our creator community has achieved since last year’s RDC. Here are a few key takeaways:
• Adopt Me! now has over 10 billion plays and surpassed 1.6 million concurrent users.
• Piggy, launched in January 2020, has close to 5 billion visits in just over six months.
• Developers are on track to earn over $250 million in 2020.
• In June 2020, developers earned nearly $2 million from Premium Payouts, which rewards them based on the amount of engagement time Premium subscribers spend in their games.
• There are now 345,000 developers on the platform who are monetizing their games.
And lastly, our VP of Engineering, Technology, Adam Miller (a.k.a. rbadam), unveiled a myriad of cool and upcoming features developers will someday be able to sink their teeth into. We saw a glimpse of procedural skies, skinned meshes, more high-quality materials, new terrain types, more fonts in Studio, a new asset type for in-game videos, haptic feedback on mobile, real-time CSG operations, and many more awesome tools that will unlock the potential for even bigger, more immersive experiences on Roblox.

## Vibin’

Despite the virtual setting, RDC just wouldn’t have been the same without any fun party activities and networking opportunities. So, we invited special guests DJ Hyper Potions and cyber mentalist Colin Cloud for some truly awesome, truly mind-bending entertainment. Yoga instructor Erin Gilmore also swung by to inspire attendees to get out of their chair and get their body moving. And of course, we even had virtual rooms dedicated to karaoke and head-to-head social games, like trivia and Pictionary.
Over on the networking side, Team Adopt Me, Red Manta, StyLiS Studios, and Summit Studios hosted a virtual booth for attendees to ask questions, submit resumes, and more. We also had a networking session where three participants would be randomly grouped together to get to know each other.

## What does Roblox mean to you?

We all know how talented the Roblox community is from your creations. We’ve heard plenty of stories over the years about how Roblox has touched your lives, how you’ve made friendships, learned new skills, or simply found a place where you can be yourself. We wanted to hear more. So, we asked attendees: What does Roblox mean to you? How has Roblox connected you? How has Roblox changed your life? Then, over the course of RDC, we incorporated your responses into this awesome mural.
📷
Created by Alece Birnbach at Graphic Recording Studio

## Knowledge is power

This year’s breakout sessions included presentations from Roblox developers and staff members on the latest game development strategies, a deep dive into the Roblox engine, learning how to animate with Blender, tools for working together in teams, building performant game worlds, and the new Creator Dashboard. Dr. Michael Rich, Associate Professor at Harvard Medical School and Physician at Boston Children’s Hospital, also led attendees through a discussion on mental health and how to best take care of you and your friends’ emotional well-being, especially now during these challenging times.
📷
Making the Dream Work with Teamwork (presented by Roblox developer Myzta)
In addition to our traditional Q&A panel with top product and engineering leaders at Roblox, we also held a special session with Builderman himself to answer the community’s biggest questions.
📷
Roblox Product and Engineering Q&A Panel

## 2020 Game Jam

The Game Jam is always one of our favorite events of RDC. It’s a chance for folks to come together, flex their development skills, and come up with wildly inventive game ideas that really push the boundaries of what’s possible on Roblox. We had over 60 submissions this year—a new RDC record.
Once again, teams of up to six people from around the world had less than 24 hours to conceptualize, design, and publish a game based on the theme “2020 Vision,” all while working remotely no less! To achieve such a feat is nothing short of awe-inspiring, but as always, our dev community was more than up for the challenge. I’ve got to say, these were some of the finest creations we’ve seen.
WINNERS
Best in Show: Shapescape
Created By: GhettoMilkMan, dayzeedog, maplestick, theloudscream, Brick_man, ilyannna
You awaken in a strange laboratory, seemingly with no way out. Using a pair of special glasses, players must solve a series of anamorphic puzzles and optical illusions to make their escape.

Excellence in Visual Art: agn●sia
Created By: boatbomber, thisfall, Elttob
An obby experience unlike any other, this game is all about seeing the world through a different lens. Reveal platforms by switching between different colored lenses and make your way to the end.

Most Creative Gameplay: Visions of a perspective reality
Created By: Noble_Draconian and Spathi
Sometimes all it takes is a change in perspective to solve challenges. By switching between 2D and 3D perspectives, players can maneuver around obstacles or find new ways to reach the end of each level.

Outstanding Use of Tech: The Eyes of Providence
Created By: Quenty, Arch_Mage, AlgyLacey, xJennyBeanx, Zomebody, Crykee
This action/strategy game comes with a unique VR twist. While teams fight to construct the superior monument, two VR players can support their minions by collecting resources and manipulating the map.

Best Use of Theme: Sticker Situation
Created By: dragonfrosting and Yozoh
Set in a mysterious art gallery, players must solve puzzles by manipulating the environment using a magic camera and stickers. Snap a photograph, place down a sticker, and see how it changes the world.
OTHER TOP PICKS
HONORABLE MENTIONS
For the rest of the 2020 Game Jam submissions, check out the list below:
20-20 Vision | 20/20 Vision | 2020 Vision, A Crazy Perspective | 2020 Vision: Nyon | A Wild Trip! | Acuity | Best Year Ever | Better Half | Bloxlabs | Climb Stairs to 2021 | Double Vision (Team hey apple) | Eyebrawl | Eyeworm Exam | FIRE 2020 | HACKED | Hyperspective | Lucid Scream | Mystery Mansion | New Years at the Museum | New Year’s Bash | Poor Vision | Predict 2020 | RBC News | Retrovertigo | Second Wave | see no evil | Sight Fight | Sight Stealers | Spectacles Struggle | Specter Spectrum | Survive 2020 | The Lost Chicken Leg | The Outbreak | The Spyglass | Time Heist | Tunnel Vision | Virtual RDC – The Story | Vision (Team Freepunk) | Vision (Team VIP People ####) | Vision Developers Conference 2020 | Vision Is Key | Vision Perspective | Vision Racer | Visions | Zepto
And last but not least, we wanted to give a special shout out to Starboard Studios. Though they didn’t quite make it on time for our judges, we just had to include Dave’s Vision for good measure. 📷
Thanks to everyone who participated in the Game Jam, and congrats to all those who took home the dub in each of our categories this year. As the winners of Best in Show, the developers of Shapescape will have their names forever engraved on the RDC Game Jam trophy back at Roblox HQ. Great work!

## ‘Til next year

And that about wraps up our coverage of the first-ever digital RDC. Thanks to all who attended! Before we go, we wanted to share a special “behind the scenes” video from the 2020 RDC photoshoot.
Check it out:
It was absolutely bonkers. Getting 350 of us all in one server was so much fun and really brought back the feeling of being together with everyone again. That being said, we can’t wait to see you all—for real this time—at RDC next year. It’s going to be well worth the wait. ‘Til we meet again, my friends.

# Improving Simulation and Performance with an Advanced Physics Solver

## 05, 2020

by chefdeletat
PRODUCT & TECH
📷In mid-2015, Roblox unveiled a major upgrade to its physics engine: the Projected Gauss-Seidel (PGS) physics solver. For the first year, the new solver was optional and provided improved fidelity and greater performance compared to the previously used spring solver.
In 2016, we added support for a diverse set of new physics constraints, incentivizing developers to migrate to the new solver and extending the creative capabilities of the physics engine. Any new places used the PGS solver by default, with the option of reverting back to the classic solver.
We ironed out some stability issues associated with high mass differences and complex mechanisms with the introduction of the hybrid LDL-PGS solver in mid-2018. This made the old solver obsolete, and it was completely disabled in 2019, automatically migrating all places to the PGS.
In 2019, the performance was further improved using multi-threading that splits the simulation into jobs consisting of connected islands of simulating parts. We still had performance issues related to the LDL that we finally resolved in early 2020.
The physics engine is still being improved and optimized for performance, and we plan on adding new features for the foreseeable future.

## Implementing the Laws of Physics

📷
The main objective of a physics engine is to simulate the motion of bodies in a virtual environment. In our physics engine, we care about bodies that are rigid, that collide and have constraints with each other.
A physics engine is organized into two phases: collision detection and solving. Collision detection finds intersections between geometries associated with the rigid bodies, generating appropriate collision information such as collision points, normals and penetration depths. Then a solver updates the motion of rigid bodies under the influence of the collisions that were detected and constraints that were provided by the user.
📷
The motion is the result of the solver interpreting the laws of physics, such as conservation of energy and momentum. But doing this 100% accurately is prohibitively expensive, and the trick to simulating it in real time is to approximate, trading some accuracy for performance, as long as the result remains physically realistic. As long as the basic laws of motion are maintained within a reasonable tolerance, this tradeoff is completely acceptable for a computer game simulation.

## Taking Small Steps

The main idea of the physics engine is to discretize the motion using time-stepping. The equations of motion of constrained and unconstrained rigid bodies are very difficult to integrate directly and accurately. The discretization subdivides the motion into small time increments, where the equations are simplified and linearized making it possible to solve them approximately. This means that during each time step the motion of the relevant parts of rigid bodies that are involved in a constraint is linearly approximated.
📷📷
Although a linearized problem is easier to solve, it produces drift in a simulation containing non-linear behaviors, like rotational motion. Later we’ll see mitigation methods that help reduce the drift and make the simulation more plausible.
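As a toy illustration of what time-stepping means in practice, here is a minimal sketch of a single unconstrained body advanced with small, fixed time increments (semi-implicit Euler). This is my own example with made-up numbers, not the Roblox solver:

```cpp
#include <cstdio>

// Minimal time-stepping sketch (illustration only, not the Roblox engine).
// Each step linearizes the motion: velocity is advanced by the acceleration,
// then position by the new velocity.
struct Body {
    double position = 10.0;   // meters
    double velocity = 0.0;    // meters / second
};

int main() {
    Body body;
    const double gravity = -9.81;   // m/s^2, constant acceleration
    const double dt = 1.0 / 240.0;  // small, fixed time increment

    for (int step = 0; step < 240; ++step) {  // simulate one second
        body.velocity += gravity * dt;        // linearized velocity update
        body.position += body.velocity * dt;  // linearized position update
    }
    std::printf("after 1s: position=%.3f velocity=%.3f\n",
                body.position, body.velocity);
    return 0;
}
```

The smaller the time step, the closer this staircase of linear updates tracks the true (non-linear) trajectory, which is exactly the drift-versus-cost tradeoff described above.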

## Solving

📷
Having linearized the equations of motion for a time step, we end up needing to solve a linear system or linear complementarity problem (LCP). These systems can be arbitrarily large and can still be quite expensive to solve exactly. Again the trick is to find an approximate solution using a faster method. A modern method to approximately solve an LCP with good convergence properties is the Projected Gauss-Seidel (PGS). It is an iterative method, meaning that with each iteration the approximate solution is brought closer to the true solution, and its final accuracy depends on the number of iterations.
📷
This animation shows how a PGS solver changes the positions of the bodies at each step of the iteration process, the objective being to find the positions that respect the ball and socket constraints while preserving the center of mass at each step (this is a type of positional solver used by the IK dragger). Although this example has a simple analytical solution, it's a good demonstration of the idea behind the PGS. At each step, the solver fixes one of the constraints and lets the other be violated. After a few iterations, the bodies are very close to their correct positions. A characteristic of this method is how some rigid bodies seem to vibrate around their final position, especially when coupling interactions with heavier bodies. If we don't do enough iterations, the yellow part might be left in a visibly invalid state where one of its two constraints is dramatically violated. This is called the high mass ratio problem, and it has been the bane of physics engines as it causes instabilities and explosions. If we do too many iterations, the solver becomes too slow; if we do too few, it becomes unstable. Balancing the two sides has been a painful and long process.
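To make the iteration itself concrete, here is a heavily simplified sketch of projected Gauss-Seidel applied to a tiny linear complementarity problem. The matrix and vector are made-up numbers of my own, not anything from the Roblox solver; the point is only the sweep-and-clamp structure, where each pass brings the approximation closer to the true solution:

```cpp
#include <array>
#include <algorithm>
#include <cstdio>

// Projected Gauss-Seidel sketch (illustration only). We approximately solve
//     w = A z + b,   z >= 0,   w >= 0,   z . w = 0
// by sweeping over the unknowns one at a time and clamping each to stay
// non-negative. More iterations -> better approximation.
int main() {
    constexpr int n = 3;
    const std::array<std::array<double, n>, n> A = {{  // small SPD system, made-up values
        {4.0, 1.0, 0.0},
        {1.0, 3.0, 1.0},
        {0.0, 1.0, 2.0},
    }};
    const std::array<double, n> b = {-1.0, -2.0, 3.0};

    std::array<double, n> z = {0.0, 0.0, 0.0};  // warm starting would reuse last step's z

    for (int iteration = 0; iteration < 20; ++iteration) {
        for (int i = 0; i < n; ++i) {
            double residual = b[i];
            for (int j = 0; j < n; ++j) {
                if (j != i) residual += A[i][j] * z[j];
            }
            // Solve row i exactly, then project onto the feasible set (z_i >= 0).
            z[i] = std::max(0.0, -residual / A[i][i]);
        }
    }
    std::printf("z = (%.4f, %.4f, %.4f)\n", z[0], z[1], z[2]);
    return 0;
}
```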

## Mitigation Strategies

📷
A solver has two major sources of inaccuracy: time-stepping and iterative solving (there is also floating-point drift, but it's minor compared to the first two). These inaccuracies introduce errors in the simulation, causing it to drift from the correct path. Some of this drift is tolerable, like slightly different velocities or energy loss, but some is not, like instabilities, large energy gains, or dislocated constraints.
Therefore a lot of the complexity in the solver comes from the implementation of methods to minimize the impact of computational inaccuracies. Our final implementation uses some traditional and some novel mitigation strategies:
1. Warm starting: starting with the solution from a previous time-step to increase the convergence rate of the iterative solver
2. Post-stabilization: reprojecting the system back to the constraint manifold to prevent constraint drift
3. Regularization: adding compliance to the constraints ensuring a solution exists and is unique
4. Pre-conditioning: using an exact solution to a linear subsystem, improving the stability of complex mechanisms
Strategies 1, 2 and 3 are pretty traditional, but 3 has been improved and perfected by us. Also, although 4 is not unheard of, we haven’t seen any practical implementation of it. We use an original factorization method for large sparse constraint matrices and a new efficient way of combining it with the PGS. The resulting implementation is only slightly slower compared to pure PGS but ensures that the linear system coming from equality constraints is solved exactly. Consequently, the equality constraints suffer only from drift coming from the time discretization. Details on our methods are contained in my GDC 2020 presentation. Currently, we are investigating direct methods applied to inequality constraints and collisions.

## Getting More Details

Traditionally there are two mathematical models for articulated mechanisms: there are reduced coordinate methods spearheaded by Featherstone, that parametrize the degrees of freedom at each joint, and there are full coordinate methods that use a Lagrangian formulation.
We use the second formulation as it is less restrictive and requires much simpler mathematics and implementation.
The Roblox engine uses analytical methods to compute the dynamic response of constraints, as opposed to penalty methods that were used before. Analytical methods were initially introduced in Baraff 1989, where they are used to treat both equality and non-equality constraints in a consistent manner. Baraff observed that the contact model can be formulated using quadratic programming, and he provided a heuristic solution method (which is not the method we use in our solver).
Instead of using force-based formulation, we use an impulse-based formulation in velocity space, originally introduced by Mirtich-Canny 1995 and further improved by Stewart-Trinkle 1996, which unifies the treatment of different contact types and guarantees the existence of a solution for contacts with friction. At each timestep, the constraints and collisions are maintained by applying instantaneous changes in velocities due to constraint impulses. An excellent explanation of why impulse-based simulation is superior is contained in the GDC presentation of Catto 2014.
The frictionless contacts are modeled using a linear complementarity problem (LCP) as described in Baraff 1994. Friction is added as a non-linear projection onto the friction cone, interleaved with the iterations of the Projected Gauss-Seidel.
The numerical drift that introduces positional errors in the constraints is resolved using a post-stabilization technique using pseudo-velocities introduced by Cline-Pai 2003. It involves solving a second LCP in the position space, which projects the system back to the constraint manifold.
The LCPs are solved using a PGS / Impulse Solver popularized by Catto 2005 (also see Catto 2009). This method is iterative and considers each individual constraint in sequence, resolving it independently. Over many iterations, and in ideal conditions, the system converges to a global solution.
Additionally, high mass ratio issues in equality constraints are ironed out by preconditioning the PGS using the sparse LDL decomposition of the constraint matrix of equality constraints. Dense submatrices of the constraint matrix are sparsified using a method we call Body Splitting. This is similar to the LDL decomposition used in Baraff 1996, but allows more general mechanical systems, and solves the system in constraint space. For more information, you can see my GDC 2020 presentation.
The architecture of our solver follows the idea of Guendelman-Bridson-Fedkiw, where the velocity and position stepping are separated by the constraint resolution. Our time sequencing is:
2. Constraint resolution in velocity space and position space
This scheme has the advantage of integrating only valid velocities, and limiting latency in external force application but allowing a small amount of perceived constraint violation due to numerical drift.
An excellent reference for rigid body simulation is the book Erleben 2005 that was recently made freely available. You can find online lectures about physics-based animation, a blog by Nilson Souto on building a physics engine, a very good GDC presentation by Erin Catto on modern solver methods, and forums like the Bullet Physics Forum and GameDev which are excellent places to ask questions.

## In Conclusion

The field of game physics simulation presents many interesting problems that are both exciting and challenging. There are opportunities to learn a substantial amount of cool mathematics and physics and to use modern optimization techniques. It's an area of game development that tightly marries mathematics, physics and software engineering.
Even though Roblox has a good rigid body physics engine, there are still areas where it can be improved and optimized. We are also working on exciting new projects like fracturing, deformation, softbody, cloth, aerodynamics and water simulation.
Neither Roblox Corporation nor this blog endorses or supports any company or service. Also, no guarantees or promises are made regarding the accuracy, reliability or completeness of the information contained in this blog.
This blog post was originally published on the Roblox Tech Blog.

# Using Clang to Minimize Global Variable Use

## 23, 2020

by RandomTruffle
PRODUCT & TECH
Every non-trivial program has at least some amount of global state, but too much can be a bad thing. In C++ (which constitutes close to 100% of Roblox’s engine code) this global state is initialized before main() and destroyed after returning from main(), and this happens in a mostly non-deterministic order. In addition to leading to confusing startup and shutdown semantics that are difficult to reason about (or change), it can also lead to severe instability.
Roblox code also creates a lot of long-running detached threads (threads which are never joined and just run until they decide to stop, which might be never). These two things together have a very serious negative interaction on shutdown, because long-running threads continue accessing the global state that is being destroyed. This can lead to elevated crash rates, test suite flakiness, and just general instability.
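To make the failure mode concrete, here is a small self-contained example of my own (hypothetical names, not Roblox code) showing the before-main / after-main lifetime of a global object that the post is describing:

```cpp
#include <cstdio>

// A global's constructor runs before main() and its destructor runs after
// main() returns; across translation units the order is unspecified.
struct Logger {
    Logger()  { std::puts("Logger constructed (before main)"); }
    ~Logger() { std::puts("Logger destroyed (after main)"); }
    void log(const char* msg) { std::printf("log: %s\n", msg); }
};

Logger g_logger;  // global state

int main() {
    g_logger.log("inside main");
    // If a detached thread kept calling g_logger.log() here, it could still be
    // running while ~Logger() executes after main returns -- exactly the kind
    // of shutdown race described above.
    return 0;
}
```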
The first step to digging yourself out of a mess like this is to understand the extent of the problem, so in this post I’m going to talk about one technique you can use to gain visibility into your global startup flow. I’m also going to discuss how we are using this to improve stability across the entire Roblox game engine platform by decreasing our use of global variables.

## Introducing -finstrument-functions

Nothing excites me more than learning about a new obscure compiler option that I've never had a use for before, so I was pretty happy when a colleague pointed me to this option in the Clang Command Line Reference. I'd never used it before, but it sounded very cool. The idea is that if we could get the compiler to tell us every time it entered and exited a function, we could filter this information through a symbolizer of some kind and generate a report of functions that a) occur before main(), and b) are the very first function in the call-stack (indicating it's a global).
Unfortunately, the documentation basically just tells you that the option exists with no mention of how to use it or if it even actually does what it sounds like it does. There are also two different options that sound similar to each other (-finstrument-functions and -finstrument-functions-after-inlining), and I still wasn't entirely sure what the difference was. So I decided to throw up a quick sample on godbolt to see what happened, which you can see here. Note there are two assembly outputs for the same source listing. One uses the first option and the other uses the second option, and we can compare the assembly output to understand the differences. We can gather a few takeaways from this sample:
1. The compiler is injecting calls to __cyg_profile_func_enter and __cyg_profile_func_exit inside of every function, inline or not.
2. The only difference between the two options occurs at the call-site of an inline function.
3. With -finstrument-functions, the instrumentation for the inlined function is inserted at the call-site, whereas with -finstrument-functions-after-inlining we only have instrumentation for the outer function. This means that when using -finstrument-functions-after-inlining you won't be able to determine which functions are inlined and where.
Of course, this sounds exactly like what the documentation said it did, but sometimes you just need to look under the hood to convince yourself.
To put all of this another way, if we want to know about calls to inline functions in this trace we need to use -finstrument-functions because otherwise their instrumentation is silently removed by the compiler. Sadly, I was never able to get -finstrument-functions to work on a real example. I would always end up with linker errors deep in the Standard C++ Library which I was unable to figure out. My best guess is that inlining is often a heuristic, and this can somehow lead to subtle ODR (one-definition rule) violations when the optimizer makes different inlining decisions from different translation units. Luckily global constructors (which is what we care about) cannot possibly be inlined anyway, so this wasn’t a problem.
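Before getting to the linker headaches below, here is a minimal sketch of what definitions for these two callbacks can look like. This is only my own illustration of the mechanism (compiled with one of the two flags above); the hooks actually used in the post are in the godbolt samples it links to:

```cpp
#include <cstdio>

// Build (assumption, not from the post): clang++ -finstrument-functions-after-inlining example.cpp
// The no_instrument_function attribute keeps the hooks themselves from being
// instrumented, which would otherwise recurse forever.
extern "C" {

__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void* callee, void* call_site) {
    std::fprintf(stderr, "enter %p (called from %p)\n", callee, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void* callee, void* call_site) {
    std::fprintf(stderr, "exit  %p (called from %p)\n", callee, call_site);
}

} // extern "C"

static int answer() { return 42; }  // hypothetical instrumented function

int main() {
    return answer() == 42 ? 0 : 1;
}
```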
I suppose I should also mention that I still got tons of linker errors with -finstrument-functions-after-inlining as well, but I did figure those out. As best as I can tell, this option seems to imply --whole-archive linker semantics. Discussion of --whole-archive is outside the scope of this blog post, but suffice it to say that I fixed it by using linker groups (e.g. -Wl,--start-group and -Wl,--end-group) on the compiler command line. I was a bit surprised that we didn't get these same linker errors without this option and still don't totally understand why. If you happen to know why this option would change linker semantics, please let me know in the comments!

## Implementing the Callback Hooks

If you’re astute, you may be wondering what in the world __cyg_profile_func_enter and __cyg_profile_func_exit are and why the program is even successfully linking in the first without giving undefined symbol reference errors, since the compiler is apparently trying to call some function we’ve never defined. Luckily, there are some options that allow us to see inside the linker’s algorithm so we can find out where it’s getting this symbol from to begin with. Specifically, -y should tell us how the linker is resolving . We’ll try it with a dummy program first and a symbol that we’ve defined ourselves, then we’ll try it with __cyg_profile_func_enter .
[email protected]:~/src/sandbox$ cat instr.cpp
int main() {}
[email protected]:~/src/sandbox$ clang++-9 -fuse-ld=lld -Wl,-y -Wl,main instr.cpp
/usr/bin/../lib/gcc/x86_64-linux-gnu/crt1.o: reference to main
/tmp/instr-5b6c60.o: definition of main
No surprises here. The C Runtime Library references main(), and our object file defines it. Now let’s see what happens with __cyg_profile_func_enter and -finstrument-functions-after-inlining.
[email protected]:~/src/sandbox$ clang++-9 -fuse-ld=lld -finstrument-functions-after-inlining -Wl,-y -Wl,__cyg_profile_func_enter instr.cpp
/tmp/instr-8157b3.o: reference to __cyg_profile_func_enter
/lib/x86_64-linux-gnu/libc.so.6: shared definition of __cyg_profile_func_enter
Now, we see that libc provides the definition, and our object file references it. Linking works a bit differently on Unix-y platforms than it does on Windows, but basically this means that if we define this function ourselves in our cpp file, the linker will just automatically prefer it over the shared library version. Working godbolt link without runtime output is here.
So now you can kind of see where this is going, however there are still a couple of problems left to solve.
1. We don't want to do this for a full run of the program. We want to stop as soon as we reach main.
2. We need a way to symbolize this trace.
The first problem is easy to solve. All we need to do is compare the address of the function being called to the address of main, and set a flag indicating we should stop tracing henceforth. (Note that taking the address of main is undefined behavior[1], but for our purposes it gets the job done, and we aren't shipping this code, so ¯\_(ツ)_/¯). The second problem probably deserves a little more discussion though.

## Symbolizing the Traces

In order to symbolize these traces, we need two things. First, we need to store the trace somewhere on persistent storage. We can't expect to symbolize in real time with any kind of reasonable performance. You can write some C code to save the trace to some magic filename, or you can do what I did and just write it to stderr (this way you can pipe stderr to some file when you run it). Second, and perhaps more importantly, for every address we need to write out the full path to the module the address belongs to. Your program loads many shared libraries, and in order to translate an address into a symbol, we have to know which shared library or executable the address actually belongs to. In addition, we have to be careful to write out the address of the symbol in the file on disk. When your program is running, the operating system could have loaded it anywhere in memory. And if we're going to symbolize it after the fact we need to make sure we can still reference it after the information about where it was loaded in memory is lost. The Linux function dladdr() gives us both pieces of information we need. A working godbolt sample with the exact implementation of our instrumentation hooks as they appear in our codebase can be found here.

## Putting it All Together

Now that we have a file in this format saved on disk, all we need to do is symbolize the addresses. addr2line is one option, but I went with llvm-symbolizer as I find it more robust. I wrote a Python script to parse the file and symbolize each address, then print it in the same "visual" hierarchical format that the original output file is in. There are various options for filtering the resulting symbol list so that you can clean up the output to include only things that are interesting for your case. For example, I filtered out any globals that have boost:: in their name, because I can't exactly go rewrite boost to not use global variables. The script isn't as simple as you would think, because simply crawling each line and symbolizing it would be unacceptably slow (when I tried this, it took over 2 hours before I finally killed the process).
This is because the same address might appear thousands of times, and there's no reason to run llvm-symbolizer against the same address multiple times. So there's a lot of smarts in there to pre-process the address list and eliminate duplicates. I won't discuss the implementation in more detail because it isn't super interesting. But I'll do even better and provide the source!
So after all of this, we can run any one of our internal targets to get the call tree, run it through the script, and then get output like this (actual output from a Roblox process, source file information removed):
excluded_symbols = ['.*boost.*']
excluded_modules = ['/usr.*']
/usr/lib/x86_64-linux-gnu/libLLVM-9.so.1: 140 unique addresses
InterestingRobloxProcess: 38928 unique addresses
/usr/lib/x86_64-linux-gnu/libstdc++.so.6: 1 unique addresses
/usr/lib/x86_64-linux-gnu/libc++.so.1: 3 unique addresses
Printing call tree with depth 2 for 29276 global variables.
__cxx_global_var_init.5 (InterestingFile1.cpp:418:22)
RBX::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:415:0)
__cxx_global_var_init.19 (InterestingFile2.cpp:183:34)
(anonymous namespace)::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:171:0)
__cxx_global_var_init.274 (InterestingFile3.cpp:2364:33)
RBX::InterestingRobloxClass3::InterestingRobloxClass3()
So there you have it: the first half of the battle is over. I can run this script on every platform, compare results to understand what order our globals are actually initialized in in practice, then slowly migrate this code out of global initializers and into main where it can be deterministic and explicit.

## Future Work

It occurred to me sometime after implementing this that we could make a general purpose profiling hook that exposed some public symbols (dllexport'ed if you speak Windows), and allowed a plugin module to hook into this dynamically. This plugin module could filter addresses using whatever arbitrary logic it was interested in. One interesting use case I came up with for this is that it could look up the debug information, check if the current address maps to the constructor of a function local static, and write out the address if so. This effectively allows us to gain a deeper understanding of the order in which our lazy statics are initialized. The possibilities are endless here.

## Further Reading

If you're interested in this kind of thing, I've collected a couple of my favorite references for this kind of topic.
1. Various: The C++ Language Standard
2. Matt Godbolt: The Bits Between the Bits: How We Get to main()
3. Ryan O'Neill: Learning Linux Binary Analysis
4. John R. Levine: Linkers and Loaders
5. https://eel.is/c++draft/basic.exec#basic.start.main-3
Neither Roblox Corporation nor this blog endorses or supports any company or service. Also, no guarantees or promises are made regarding the accuracy, reliability or completeness of the information contained in this blog.
submitted by jaydenweez to u/jaydenweez

##### binary options trading

The vfxAlert software provides a full range of analytical tools online and a convenient interface for working in the broker's trading platform. In one working window, we show the most necessary data in order to correctly assess the situation on the market. The vfxAlert software includes direct binary signals, online charts, a trend indicator, market news, and the ability to work with any broker.
Also for our subscribers, we offer services for sending signals to Telegram messenger and additional analytical and statistical information. You can use binary options signals online, in a browser window, without downloading the vfxAlert application. https://vfxalert.com/en?&utm_source=links
submitted by binaryoptionstra to u/binaryoptionstra

##### Mega Unpopular Opinion: Take-home projects can be great!

Ah, I have been debating whether or not I wanted to write this for a while now, but after seeing a few recent threads with 10-50 comments unanimously hating on take-home projects, I figured I would share my opinion. Some of you may not read until the end, so let me preface this by saying not all take-home projects are great. I am on your side in that you should not complete a take-home project if any of the following are true:
• The take-home project is the first step of the interview process, or you have not spoken to a hiring manager / technical individual
• You do not really want this job / wouldn't be super excited if you landed the position
• You already have a job that you are content with and actually don't have the time
• And last-but-not-least, and this should really go without saying, if it is OBVIOUS that the company is using you for free labor ...
• And a bonus one for the few employers / hiring managers who may be reading this: DO NOT GIVE A TAKE-HOME TEST AND GHOST THE CANDIDATE. You should be providing constructive feedback, or at the very least, a formal rejection email / phone call. Shame on you if you ask a candidate to contribute this much time and don't even give them a measly response.
With that being said, I will now move on to why I think take-home projects can be great. For starters, they weed out sooooo much of the competition. If you look at some job postings on LinkedIn, they can have 200+ applicants in 24 hours, and that is not even accounting for people who find the job via other means (i.e. other job boards, company website, etc). That's a lot of applicants. Now, I know better than to assume that this subreddit is representative of the whole software industry, but clearly a take-home project potentially gets candidates TO WEED THEMSELVES OUT. So 200 candidates may have applied, but now you're competing with a significantly smaller percentage of people who actually wanted to take the time and do the take-home project. Your odds are much better now.
Now, I know exactly what you're thinking. You don't want to spend the 8-12 hours it would take to complete this take-home project, and you'd rather spend your time casting your net farther and shotgunning your resume out to more companies, but WHY NOT BOTH? You're the one looking for a job, and you are really not in the position to weed yourself out of potential employment. Some of you people have been on the hunt for a job for months and still won't stoop down to the level of giving a company that much time without being guaranteed another interview / a job. News-flash: doing a project increases your chance of getting a job, just like shotgunning your resume, AND you get to practice / show off your programming skills (who knows, maybe mess around and make a project you can put on your GitHub as a sample of your work for other employers to see). On top of this, if you are someone with a lot of free time - I'm looking at you, new grads - and don't have a family/responsibilities that you need to take care of, then you really can't complain about time.
Let's face it, instead of doing this project, you're watching Silicon Valley on HBO for the third time "to relax" after a "long day" of filling out the same Workday application forms. Come on, searching for a full-time job should be a 40 hr/wk job in and of itself.
My next point is that these take-home projects sometimes substitute for final/on-site interviews. Yea, those 5-hour interviews where you meet every hiring manager and their mother and get grilled round after round because you can't find the optimal solution for sorting a reverse binary search tree that is upside down, flipped, and cooked well done while someone is staring at you, asking questions, and forbidding you from using any resources you would have at your disposal in (almost) any given real-world scenario. Yea, those are the real stress-inducing woes of the software interview process, and I would think people would want to avoid those at all costs. Anecdotally, the company that I started working for 3 months ago gave me the choice of a 4 1/2 hour zoom interview consisting of 4 one-hour technical interviews with different hiring managers, or a take-home project that would take 6-10 hours with a 1 1/2 hour follow-up discussing my project. The decision was so obvious - stress-study an entire week before the interview (hint, this alone probably would take up more time than the take-home project, but on the other hand does prepare you for future interviews) and then endure the torture that is 4+ hours on a zoom call / in an office coding on a whiteboard, or spend about 1-2 hours a day for a week, with access to all resources, leisurely coding up a project that, if done correctly, increases your chance of getting a job astronomically. Not to mention, this option is becoming much more popular with COVID and WFH and the lack of being able to get candidates into the office.
All in all, I really wish more companies offered take-home projects as at least an option for their interview process. In my opinion, they are more informative for both parties, as they represent the work you will be doing if you were to get the job, and they are indicative of the level of effort and knowledge you possess in the context of the position they are seeking to fill. I really wish everyone on here would stop spreading their hatred for take-home projects, especially to new grads who have never even done them. And for the love of god, stop saying to bill the company for making you do a take-home project; that is just the silliest thing I have ever heard, and I DOUBT any company ever would reply to that kind of an invoice. If you really have that much aversion to them, just don't bother.
TL;DR: I believe some take-home projects are worth doing ¯\_(ツ)_/¯
submitted by Kixstander to cscareerquestions

##### boolean

# Boolean data type

From Wikipedia, the free encyclopedia
In computer science, the Boolean data type is a data type that has one of two possible values (usually denoted true and false) which is intended to represent the two truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid 19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions by changing control flow depending on whether a programmer-specified Boolean condition evaluates to true or false. It is a special case of a more general logical data type (see probabilistic logic)—logic doesn't always need to be Boolean.
## Generalities

In programming languages with a built-in Boolean data type, such as Pascal and Java, the comparison operators such as > and ≠ are usually defined to return a Boolean value. Conditional and iterative commands may be defined to test Boolean-valued expressions.
Languages with no explicit Boolean data type, like C90 and Lisp, may still represent truth values by some other data type. Common Lisp uses an empty list for false, and any other value for true. The C programming language uses an integer type, where relational expressions like i > j and logical expressions connected by && and || are defined to have value 1 if true and 0 if false, whereas the test parts of if, while, for, etc., treat any non-zero value as true.[1][2] Indeed, a Boolean variable may be regarded (and implemented) as a numerical variable with one binary digit (bit), which can store only two values. Booleans in computers are most likely implemented as a full word, rather than a bit; this is usually due to the ways computers transfer blocks of information.
Most programming languages, even those with no explicit Boolean type, have support for Boolean algebraic operations such as conjunction (AND, &, *), disjunction (OR, |, +), equivalence (EQV, =, ==), exclusive or/non-equivalence (XOR, NEQV, ^, !=), and negation (NOT, ~, !).
In some languages, like Ruby, Smalltalk, and Alice, the true and false values belong to separate classes, i.e., True and False, respectively, so there is no one Boolean type.
In SQL, which uses a three-valued logic for explicit comparisons because of its special treatment of Nulls, the Boolean data type (introduced in SQL:1999) is also defined to include more than two truth values, so that SQL Booleans can store all logical values resulting from the evaluation of predicates in SQL. A column of Boolean type can also be restricted to just TRUE and FALSE though.

## ALGOL and the built-in boolean type

One of the earliest programming languages to provide an explicit boolean data type is ALGOL 60 (1960), with values true and false and logical operators denoted by the symbols '∧' (and), '∨' (or), '⊃' (implies), '≡' (equivalence), and '¬' (not). Due to input device and character set limits on many computers of the time, however, most compilers used alternative representations for many of the operators, such as AND or 'AND'.
This approach with boolean as a built-in (either primitive or otherwise predefined) data type was adopted by many later programming languages, such as Simula 67 (1967), ALGOL 68 (1970),[3] Pascal (1970), Ada (1980), Java (1995), and C# (2000), among others.

## Fortran

The first version of FORTRAN (1957) and its successor FORTRAN II (1958) have no logical values or operations; even the conditional IF statement takes an arithmetic expression and branches to one of three locations according to its sign; see arithmetic IF. FORTRAN IV (1962), however, follows the ALGOL 60 example by providing a Boolean data type (LOGICAL), truth literals (.TRUE. and .FALSE.), Boolean-valued numeric comparison operators (.EQ., .GT., etc.), and logical operators (.NOT., .AND., .OR.). In FORMAT statements, a specific format descriptor ('L') is provided for the parsing or formatting of logical values.[4]

## Lisp and Scheme

The language Lisp (1958) never had a built-in Boolean data type.
Instead, conditional constructs like cond assume that the logical value false is represented by the empty list (), which is defined to be the same as the special atom nil or NIL; whereas any other s-expression is interpreted as true. For convenience, most modern dialects of Lisp predefine the atom t to have value t, so that t can be used as a mnemonic notation for true.
This approach (any value can be used as a Boolean value) was retained in most Lisp dialects (Common Lisp, Scheme, Emacs Lisp), and similar models were adopted by many scripting languages, even ones having a distinct Boolean type or Boolean values; although which values are interpreted as false and which are true vary from language to language. In Scheme, for example, the false value is an atom distinct from the empty list, so the latter is interpreted as true.

## Pascal, Ada, and Haskell

The language Pascal (1970) introduced the concept of programmer-defined enumerated types. A built-in Boolean data type was then provided as a predefined enumerated type with values FALSE and TRUE. By definition, all comparisons, logical operations, and conditional statements applied to and/or yielded Boolean values. Otherwise, the Boolean type had all the facilities which were available for enumerated types in general, such as ordering and use as indices. In contrast, converting between Booleans and integers (or any other types) still required explicit tests or function calls, as in ALGOL 60. This approach (Boolean is an enumerated type) was adopted by most later languages which had enumerated types, such as Modula, Ada, and Haskell.

## C, C++, Objective-C, AWK

Initial implementations of the language C (1972) provided no Boolean type, and to this day Boolean values are commonly represented by integers (ints) in C programs. The comparison operators (>, ==, etc.) are defined to return a signed integer (int) result, either 0 (for false) or 1 (for true). Logical operators (&&, ||, !, etc.) and condition-testing statements (if, while) assume that zero is false and all other values are true.
After enumerated types (enums) were added to the American National Standards Institute version of C, ANSI C (1989), many C programmers got used to defining their own Boolean types as such, for readability reasons. However, enumerated types are equivalent to integers according to the language standards; so the effective identity between Booleans and integers is still valid for C programs.
Standard C (since C99) provides a boolean type, called _Bool. By including the header stdbool.h, one can use the more intuitive name bool and the constants true and false. The language guarantees that any two true values will compare equal (which was impossible to achieve before the introduction of the type). Boolean values still behave as integers, can be stored in integer variables, and used anywhere integers would be valid, including in indexing, arithmetic, parsing, and formatting. This approach (Boolean values are just integers) has been retained in all later versions of C. Note that this does not mean that any integer value can be stored in a boolean variable.
C++ has a separate Boolean data type bool, but with automatic conversions from scalar and pointer values that are very similar to those of C. This approach was adopted also by many later languages, especially by some scripting languages such as AWK.
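As a quick illustration of the C++ conversions just described, here is a short snippet of my own (variable names are mine, not from the article) showing scalar and pointer values converting to bool and bool converting back to int:

```cpp
#include <iostream>

int main() {
    int i = 7;
    int* p = &i;
    int* q = nullptr;

    bool from_int     = i;   // any non-zero value -> true
    bool from_pointer = p;   // non-null pointer   -> true
    bool from_null    = q;   // null pointer       -> false

    int back_to_int = from_int;  // true -> 1, false -> 0

    std::cout << std::boolalpha
              << from_int << ' ' << from_pointer << ' '
              << from_null << ' ' << back_to_int << '\n';
    // prints: true true false 1
    return 0;
}
```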
Objective-C also has a separate Boolean data type BOOL, with possible values being YES or NO, equivalents of true and false respectively.[5] Also, in Objective-C compilers that support C99, C's _Bool type can be used, since Objective-C is a superset of C.

## Perl and Lua

Perl has no boolean data type. Instead, any value can behave as boolean in boolean context (condition of if or while statement, argument of && or ||, etc.). The number 0, the strings "0" and "", the empty list (), and the special value undef evaluate to false.[6] All else evaluates to true.
Lua has a boolean data type, but non-boolean values can also behave as booleans. The non-value nil evaluates to false, whereas every other data type always evaluates to true, regardless of value.

## Tcl

Tcl has no separate Boolean type. Like in C, the integers 0 (false) and 1 (true - in fact any nonzero integer) are used.[7]
Examples of coding:
set v 1
if {$v} {
puts "V is 1 or true"
}
The above will show "V is 1 or true" since the expression evaluates to '1'.