
Video Recap: Introduction to Falcon Full Transcript

BENJI: Okay, hey, so good morning; my name is Benji Oswald. I’ve come down from the University of Idaho, and the goal of this talk is to give you a brief introduction to Falcon. Okay,

[slide change]

let’s cover how to get an account; it’s a fairly straightforward process. We’ll talk about the interface, which is mostly on-demand. I’ll go through a quick demo of how to actually submit a job on Falcon. Jason covered most of the Globus stuff already, but I’ll just mention that again and then answer questions.

[slide change]

So what is Falcon? Falcon is a supercomputer that the Idaho National Laboratory bought back in 2014. They’ve been operating it since then; they did a pretty big update on it in 2017, replacing all the processors, so it has 932 nodes.

Some of the nodes have failed over time just due to random hardware failures, but it’s still a lot of nodes. The theoretical capacity is 33,000 total cores. Basically, this thing is an order of magnitude bigger than most universities’ capacity in the region. So it’s a large system. We built a new back-end file system when we transferred it to university control, and there are 1.3 petabytes of storage. Everybody is allocated a nominal 30 terabytes. We don’t have strict enforcement of those quotas; nobody’s bumping up against that limit yet. If you need more data storage than that, though, there’s a process in place where you can make your case to the Falcon operations committee, which is a collection of representatives from each of the three Idaho universities. Yeah, any questions on what Falcon is?

[went to next slide] 

Okay, so again, accounts are available to faculty, staff, and students at any of the Idaho universities: UI, BSU, Idaho State. I’m going to bounce out of this thing now and give you guys the walkthrough

[exited fullscreen slideshow] 

of how to request an account. So you see the big URL up here in the corner: docs.c3plus3.org. If you want to follow along as we go, you can open that site.

[opens with Safari] 

Which I’m going to do now. Okay, so here is the docs website, and from here, you should be able to find any information you need on Falcon, how to get an account, and all that good stuff. I’m going to go through the request-an-account link; it’s right up here at the top. This is going to bounce over to this Moodle instance. If you are at Boise State University or the University of Idaho, you can hit this little CILogon button. It’s the same authentication mechanism that Globus uses and that we use in other places on Falcon as well. For ISU folks who aren’t in the CILogon system, just create a new account down here,

[hovered over “Create new Account” button]

okay. So that’s for ISU folks only. So click that guy there

[clicked “CILogon” button]. 

this should bounce you through doing the single sign-on, and since I’m already signed in, it just bounced me straight into the application

[on the home page].

The account request process is this guy right here

[“Hovering Over Falcon Account Request & Policies” link]. 

If I click on that thing... I’m already enrolled, but if you’re seeing this for the first time, you’ll need to enroll in the course manually. Then there are three steps; the first one is just to tell us why you want to use Falcon

[clicked “Research Description” link and clicked “Edit Questions”]. 

I can show you the questions here. Basically, it’s just a description of your research: what you’re going to do on Falcon, which university you’re from. If you are a student or staff member, then you need to tell us who your PI is, the professor you’re working with, and then if you can give us a subject area

[clicked “Subject Area”]

that helps us out too. Just so we know what’s being done on Falcon. The next step

[clicked “User Policy Agreement” link]

is the user policy agreement, and this is set up as a quiz in Moodle. I’ve already taken it a few times; 70 times, according to this,

[Hovers over Counter]

but I can

[clicked “Preview quiz now” button] 

preview the quiz, and you can see what it is. So the first bit of it is just the user policy, and it’s a lot of policy information. Read through there.

[scrolling down]

The main points here are highlighted at the bottom as questions. Obviously, you should not share your account password and such. Tell us if something is going wrong. Don’t put any CUI or other protected data on Falcon right now. So no personal health information, no nuclear launch codes, nothing like that, okay? And the next one, question four, stayed up there in the policies, but: we have this new fancy data storage system, and it is not backed up in any way, shape, or form. So if you accidentally wipe out all of your data, it’s gone. Sorry; it’s basically a big scratch file system. There’s a lot of it, but it’s not backed up. So just keep that in mind, and then you have to agree to the Falcon user policy, and you hit this finish thing,

[clicked “Finish attempt …” button] 

and you’ll actually click through a few buttons, and I didn’t answer anything, so it’s grumpy.

[clicked “Start Here” button].

Then finally, after you do the user policy, then you hit this little request complete thing,

[hovering over the “Request Complete” Link]

and this is the bit that actually emails the Falcon system administrators and tells us that you completed the account request, and we should create your account now.

[clicked “Request Complete” link] 

Okay, so that’s all that is. After you walk through all that then, we’ll get an email. Me and Joe and Michael and Frank, I’m sure all of us are in the same room for the first time ever, I think. So we’ll see your email, and we’ll create your account, and then you’ll get another email from one of us saying your account’s created, you can log into the system now. Okay, so that’s the accounts bit. That was fun.

[clicked onto original presentation slides and went to next slide] 

There we go.

[centered slideshow] 

Okay, so now, using the system. The URL for this is ondemand.c3plus3.org,

[opened Safari]

and I’m going to go back here and show you how to get to that from the docs

[entered “docs.c3plus3.org” in searchbar] 

site. So if you go to the docs landing page here again, you just hit log on, and you’ll get bounced over to the on-demand interface. Again, if you haven’t already logged in for the day, you’ll get bounced through your own university’s single sign-on process and then eventually end up over here. Let’s see, I’m going to try and make this a little bigger,

[expanded Safari page]

and we’ll zoom in a touch, maybe, okay. So when you log in, you’ll just see our landing page here, but there are lots of nice little interactive ways to

[clicked “Clusters”] 

get to a terminal, for example, if you want to actually just run random commands

[clicked “>_Falcon Shell Access”] 

you hit this Falcon shell access, and you should get automatically logged into a terminal interface on the on-demand server itself. I’m going to flip back over here

[switched to another Falcon terminal page]

lost my Docs

[switching around pages and opened “docs.c3plus3.org”] 

Okay, so I’m gonna run through a quick how-to for running a job on Falcon, and if you want to follow along on the docs page, go to this little workshop and click the cluster here

[clicks “Cluster” below workshops] 

and this is going to be just a little example of running an R script on Falcon. People like to use R. First step: log in, and then we’ll get into the R stuff. Of particular importance here is that none of the Falcon compute nodes have access to the internet. Okay, so if you need to install any software, including R packages, Python packages, or anything like that, you need to do it from the login node, which is the on-demand node. So we’re going to start that up

[switched back to terminal Page]

oops, this one, it’s my old one that died

[switched to working terminal] 

there we go. The timeout on this thing is like five minutes, so it’s a little short. Keep poking at it if you want to keep your terminal window alive. So to start up R, there’s a module for that, thanks to Frank. Do module load r, and we can start an R

[typed “R” and pressed enter] 

interactive session, and I’m going to copy-paste from my docs

[switched to the documentation page] 

here. Again to install anything

[copied “install.packages("stringi", repos="https://ftp.osuosl.org/pub/cran/")”]

you got to be on the on-demand node

[switched back to the terminal] [pasted copied contents and pressed enter] 

and this one is already installed for me, so it should just come back immediately. Okay, I’m going to cancel that because it’s already installed, and it actually takes a while because it compiles a whole bunch of stuff.

[switched back to the documentation page]

So install your packages, and then you can quit out of R at the terminal level there. Let me make sure that I didn’t

[highlights library(stringi) code]

totally screw that thing up.

[switches back to terminal page]

Okay yeah, I can still load it so we’re good.

[used command: “library(stringi)” then “q()”; a “Save workspace” prompt popped up after “q()” and “n” was entered]

Okay, so I’ll quit. Okay, so I’ve installed my R packages. To run stuff

[switches to documentation page]

on the cluster, it’s not interactive; you can’t just type something and have the cluster do it. It’s all script based. So we’re going to create our R script, and if you’re following along and want to do this,

[referring to the code below “The R script”]

you can just copy and paste all of this. The on-demand interface here has a nice interface for working with files.

[clicked “Files” then clicked “Home Directory”]

Click on that little files link up there. I’m going to make a new directory in my home directory

[clicked new directory] 

for all this; call it workshop.

[scrolls down] 

After I make that, it should appear here.

[hovers over the new workshop directory] 

There’s my workshop.

[clicks on workshop directory] 

I’m going to make a new file

[clicked “New File” at the top of the page] 

to make that R script. I’m going to call it monkey.R

[pressed “OK” button] [switches to R Script Documentation page]

I’m just going to grab all this stuff

[highlights code below “The R script” and copies it]

and

[switches back to workshop directory page]

click here

[clicks three vertical dots and the “Edit”] 

we’ll get into an online editor. Paste it in.

[the “R Script” code that was copied].

Okay, so what is this thing doing?

[the online IDE that was loaded] 

I’m sure everybody’s heard the example of monkeys typing on keyboards: if you let them do that for an infinitely long time, they’ll produce the works of Shakespeare or something, right? So we’re going to use the power of Falcon to try and produce some text just from random input. Those are the packages that I installed; the first one just generates random combinations of letters. It’s actually meant for generating passwords and stuff. So each job I run is going to generate a thousand words, and I’m going to run it through different lengths, because I want different lengths of words, right? So I’m running it up to 10, and basically what this little command here does

[The code is “wlist = foreach(i=1:maxlength) %do% stri_rand_strings(nwords, i, '[a-z]')”]

is generate a thousand words of each length. So for each length, one through ten, it makes a thousand words. They’re all just lowercase, and these are just going to be random character strings, right? So we need a way to tell whether each one is a word and not just a random string, and so I’ve got a database here

[database access code “real_words = read.table(“~/workshop/engmix.txt”, header = FALSE, sep = “”, dec = “.”)”]

of words let me show you that. Let me save here real quick

[pushed the “Save” button and moved back to the R documentation page on Safari] 

so here’s the link

[Above “The R script” clicked “here” link]

to download these things. So these are not super great quality dictionaries, but they’re out there. This is one

[Hovering over “English Text” link]

with 84,000 words

[Clicked the link] 

let me download that sucker and you can kind of see what’s in there.

[Opened two text files: one with random words and another with bash code]

I don’t really think three a’s in a row is a word, but

[closed dictionary text file]

you kind of pay for what you get or get what you pay for.

[transitioned back to workshop directory]

Okay, so I downloaded that file to my local computer. It’s not on Falcon yet, but I can upload

[clicked “Upload” button next to “New Directory”] 

things easily enough in this interface, so we just hit upload, then browse,

[clicked “browse files” after clicking “Upload”]

and it’s this guy right here.

[selected “engmix.txt” file in “Choose Files to Upload” and then clicked “Upload 1 file”]

upload, and now it’s there.

[Inside of the workshop directory and switched back to the R code page in Safari] 

Okay, and so then back in my R script, I’m reading that file right here.

[real_words = read.table(“~/workshop/engmix.txt”, header = FALSE, sep = “”, dec = “.”)”]

It’s just a big list of words; I’m reading it in, and it makes a big R table out of it. Okay, so the meat of this thing: it generates all my random words, and then I need to figure out whether each of those random strings is a word. These for loops loop over the whole word list and look each one up in my table, and if it’s a word, it gets added to a list. And then, after I’m done adding all those words to the list, it just writes them out right here.

[bottom three lines of “R Script” code].
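Outside of R, the same lookup idea can be sketched in shell; the three-word dictionary and the candidate strings below are made up for illustration (the real engmix.txt has roughly 84,000 entries):

```shell
# Build a tiny stand-in dictionary file.
printf 'cat\ndog\nzebra\n' > engmix.txt

# Check each candidate string against the dictionary, like the R loop:
# keep a candidate only if it matches a whole line in the word list.
found=""
for w in cat qzx dog vvv; do
  if grep -qx "$w" engmix.txt; then
    found="$found $w"
  fi
done
echo "real words:$found"
```

Only the candidates that appear verbatim in the dictionary survive; the random strings are discarded.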

One thing of note that is pretty important for working on the cluster: give your output files unique names, so you don’t have a bunch of jobs overwriting the same file every time, right? If I just wrote my output to, say, out.txt, every job I ran on the cluster would overwrite the same file. So this thing

[highlighted “stop(“You must provide an output file name”, call.=FALSE)” code]

reads one argument

[referenced code: “if (length(args) == 0)”]

from the command line up here, which is just a file name that it’s going to spit this file out to, and you’ll see that work here in a sec. Okay, so that’s the R script,
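The same guard can be sketched in shell; require_outfile here is a hypothetical stand-in for the R check, not something from the talk:

```shell
# Refuse to run without an output file name, mirroring the R script's
# stop("You must provide an output file name") check. Forcing a name
# keeps parallel jobs from clobbering each other's results.
require_outfile() {
  if [ "$#" -eq 0 ]; then
    echo "You must provide an output file name" >&2
    return 1
  fi
  printf 'writing results to %s\n' "$1"
}
```

Calling `require_outfile m1234.txt` succeeds; calling it with no argument prints the error and returns a nonzero status.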

[Changed window in Safari to workshop directory file] 

and we need to make another file,

[clicked “New File”] 

and we’re going to call this one monkey.slurm. This is the slurm script

[Clicked “OK” button] 

so the scheduler for the cluster is Slurm, which is pretty common for HPC systems. Slurm is what puts your job into the queue and lets it run. Okay, so that’s a blank file;

[clicks three vertical dots and the “Edit” for the new blank file] 

we’ll edit that sucker and come back to... where’s my... this one, there we go, okay, yeah, our script. Okay, here’s my example Slurm script

[Transitioned to “R Script” documentation page and scrolled down to “The SLURM script” coping its code] 

It’s pretty short. These things all generally need to be bash files,

[copied and pasted code into the new directory after transitioning back into the new file IDE]

and I’ll walk you through what this does. So the first line is just the shebang line

[first line: “#!/bin/bash”] 

which tells the system what program is going to execute this file. Any comment starting with #SBATCH right after that is interpreted by Slurm as a command-line argument. So the only thing I’m really telling Slurm, in this case, is that I want to put this thing

[code: “#SBATCH -p short”] 

in the short partition. All the nodes in Falcon are in four different partitions. The only difference between the partitions is how long your job is allowed to run, and you’re allowed to run many more short jobs than long jobs; that’s why we have different partitions at all. So I’m putting it in the short one,

[code: “#SBATCH -p short”] 

and then I’m going to change directories

[code: “cd $SLURM_SUBMIT_DIR”]

to the submit directory. By default you’re there, but I like to make sure. Then I’m going to load that module, because we need to do that every time out there on the nodes, and then I’m going to run this R script.

[code: module load r] 

By loading the R module, the path to the Rscript executable is known

[code: Rscript --vanilla monkey.R m$SLURM_JOB_ID.txt]

to the system, basically, and you can just call Rscript --vanilla. The --vanilla flag just keeps the environment clean; it doesn’t save the R environment, which is what we want. I’m just going to run that monkey.R file, and I’m passing a file name that I want it to save its output as.

[still referring to the last line of code given] 

So in the file name I’m basically giving it the job ID. Every Slurm job is indexed by a number, and if I include that in my file name, I won’t overwrite previous jobs every time I submit one of these things.
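Reassembled from the fragments shown on screen, the whole monkey.slurm submit script is roughly this (a sketch; your partition choice may differ):

```shell
#!/bin/bash
#SBATCH -p short      # run in the short partition

cd $SLURM_SUBMIT_DIR  # go to the directory the job was submitted from
module load r         # make R (and Rscript) available on the compute node
# The job ID in the output name keeps repeated submissions from
# overwriting each other's results.
Rscript --vanilla monkey.R m$SLURM_JOB_ID.txt
```

Submit it with `sbatch monkey.slurm`.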

[pushed “Save” button] 

Okay, so I’ll save that guy,

[changed window to R script file IDE]

and that’s that one.

[changed window in Safari to workshop directory] 

Okay, so we got all the files we need; now let’s see if we can submit this job;

[changed the window to active terminal] 

there we go. I’ll change to my workshop directory;

[used commands: “cd workshop/” then “ls”] 

there are the files, and the command to submit a Slurm job is just sbatch and then the script name.

[used command: “sbatch monkey.slurm”]

You can see it came back, submitted the job, and gave me a number.

[50146 is that number] 

To see it in the queue, I use the squeue command,

[used command: “squeue --me”]

and if I give it the --me argument, it just shows my jobs. It looks like maybe that already ran, because it doesn’t take very long.

[used command: “ls”] 

Yes, there’s my output file. If you don’t tell Slurm to name your output files differently, then anything that normally goes to the screen or your console gets put into a file for you. So if we just cat out that file,

[used command: “cat slurm-50146.out”] 

there’s nothing in it, because this thing didn’t print anything to the screen, but it did also produce this list of words, hopefully. So let’s see what’s in there.

[used command: “less m50146.txt”]

All right, there’s a whole bunch of somewhat dubious words. You can see it doesn’t do well generating longer words; it only found two words with four letters or more, and this is just due to not generating enough random words, because there’s a big word space that you’re poking at with your random generator. So we can improve things by running more jobs, and the trivial way to do that is just to submit more of these things: sbatch, sbatch, sbatch, sbatch, right?

[used command: “sbatch monkey.slurm” 5 times generating numbers 50147-50151] 

And that’s just submitting that same job over and over again. If I do squeue --me... and these all ran really fast,

[used command: “ls”] 

but you can see, now I have a whole bunch more output

[m50146.txt through m50151.txt were displayed from command “ls”]

files, and I should have a whole bunch more text in there. We’ll do one of the last ones.

[used command: “less m50150.txt” to display content and is displaying content of m50150.txt] 

So a whole bunch more words, this one found a few more four-character words. Okay

[exited file content back to terminal display] 

so that’s the basics of running a job, right? Let me go back to the terminal here. If I squeue for a sec,

[changed window back to terminal]

you can also see everything that’s in the cluster running if you just do squeue

[used command: “squeue”] 

you can see all the jobs that are in there.

[scrolling up through the jobs] 

So there are quite a few, but that’s okay; there are, again, 33,000 cores in principle available, so we can accommodate a lot of jobs. Another command that’s pretty useful is this sinfo command,

[used command: “sinfo”] 

and that looks horrendous, but this is basically the state of all the nodes in Falcon right now, separated out by partition, so it really is the same information repeated several times. So I’m just going to go up here to the top.

[where the “sinfo” command was ran]

In the tiny partition you’ve got 189 nodes that are running a mix of jobs; they’ve got some jobs running on them, but they’re not fully allocated. You’ve got about 100 nodes that are fully allocated, so they can’t handle any more jobs, and then you’ve got about 500 that are just idle, sitting there waiting for all of you to jump on and poke at this thing. So, okay,

[changed window back to coding documentation and scrolled down to SLURM script]

back to our monkey typing example. You don’t really want to sit there and submit jobs manually over and over and over again; it’s kind of a dumb way to do things. There’s a nice way in Slurm to run a whole bunch of jobs repeatedly, and that’s with array jobs,

[highlighted array job script and copies it] 

and basically what that allows you to do is tell Slurm you want X number of copies of this job. So we’ll make a new script here;

[changes window to workshop directory] 

I’ll call it monkeys.slurm instead of monkey. So, make a new file,

[presses “New File” button and then “OK” button]

and I found all my other stuff

[outputs are displayed as well] 

that I just did. Edit that guy,

[pushed vertical three dots and then “Edit”]

again copy

[Went back to copy slurm job array script and pasted it into new monkeys.slurm script] 

There we go. Okay, so this is basically the same submit script: pick a partition, cd to the right directory, and load my module. In this case, though, instead of just the job ID, I’m also passing this SLURM_ARRAY_TASK_ID parameter.

[line of code in question: “Rscript --vanilla monkey.R m$SLURM_ARRAY_JOB_ID.$SLURM_ARRAY_TASK_ID.txt”]

And so that just varies with the job array number and you’ll see what happens with that

[clicked “Save” button] 

when I run it. Find my thing

[changed window back to active terminal] 

so it’s the same sbatch command; I’m just passing it the -a flag now for an array, and I want a thousand monkeys, so I’m going to pass it one to a thousand and then my monkeys.slurm script.

[code typed: “sbatch -a 1-1000 monkeys.slurm”]

Okay, so what this is saying is: start a thousand copies of this and index them from one to a thousand. You can start the index wherever you want, and it just passes those integer numbers to your job. So if I wanted to start this at 352

[code typed: “sbatch -a 352-1000 monkeys.slurm”]

or something, right, up to a thousand, then it would make about 650 jobs, and they’d be indexed from 352 to a thousand. But right now I’m gonna do one to a thousand.
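Put together, the array version differs from monkey.slurm only in the output file name (again a sketch reconstructed from the fragments shown on screen):

```shell
#!/bin/bash
#SBATCH -p short

cd $SLURM_SUBMIT_DIR
module load r
# Each array task writes its own file: the <array job id>.<task id>
# pair is unique per task, so the thousand copies never collide.
Rscript --vanilla monkey.R m$SLURM_ARRAY_JOB_ID.$SLURM_ARRAY_TASK_ID.txt
```

`sbatch -a 1-1000 monkeys.slurm` then launches the thousand copies.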

[used command: “sbatch -a 1-1000 monkeys.slurm”]

There we go.

[used command: “squeue”] 

Now if I look at the queue, there are all my monkeys running. Lots of monkeys. So it’s running about a hundred now. If I hit this again,

[used command: “squeue”] 

let’s see how many I can get to run. There, it’s running

[scrolling through] 

600, 700, 800... and it looks like maybe I got them all running. There we go; there are a thousand jobs spread out over the cluster doing their thing, and now if I

[used command: “ls”]

look in my directory again I’ve got a whole bunch of output.

[displayed files m50152.1.txt through m50152.1000.txt]

This is why you make a new directory for each project: it helps you not have to go crazy cleaning out your home directory. There’s all my output

[scrolling through the remaining output] 

stuff. Let’s see,

[used command: “squeue --me”]

yeah okay they’re all finished.

[changed window to coding documentation and scrolled to “Retrieve our Shakespeare-esq work”] 

Then there’s kind of the tricky part here at the end. If you actually want to summarize all this stuff, you’d better use a script or something, because you don’t want to look manually through each one of those thousand files; it wouldn’t be terribly fun. That’s what this little bash command is going to do.

[highlighting code: “for fn in {1..1000}; do printf "%s " $(shuf -n1 m48290.$fn.txt); done”]

You’ll notice back in the files here that each one

[changed window to active terminal] 

is named with a job number and an array index. So I’m going to use that,

[changed window to code documentation]

and I need to replace this job number

[m48290 highlighted for the code just described] 

with the job number that I just did

[changed window to active terminal] 

and paste that in.

[pasted into the terminal: “for fn in {1..1000}; do printf "%s " $(shuf -n1 m48290.$fn.txt); done”]

It’s 50152

[changed 48290 to 50152 before pushing enter] 

so this just grabs one line, essentially, from each of those thousand files and prints it to the screen for me. So there we go. That’s what a thousand monkeys typing a thousand words randomly looks like:

[lines of two to three-character words printed] 

it’s not super legible, and that, again, was just picking a random word from each file. If I do it again, it’ll generate a bunch more gibberish. You can obviously think of some improvements to these monkeys; maybe you generate more of the longer words, so you actually have a chance of getting them. I tried to generate words up to 10 characters and didn’t find anything above probably five or six, but with Falcon you can run jobs for a lot longer than the few seconds each of mine ran, so you can conceivably do that. Okay, if you want to save this, you can pass it to a file. I just need to update my job number so it uses the right files: 50152,

[used command: “for fn in {1..1000}; do printf “%s ” $(shuf -n1 m50152.$fn.txt) >> shakespeare.txt; done “]

and then my

[used command: “less shakespeare.txt”]

Shakespearean work is there for posterity. Okay,
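At a small scale, the aggregation loop works like this; three one-line stand-in files replace the thousand real m50152.*.txt outputs, so the result here is deterministic:

```shell
# Make three demo per-task output files, named jobid.taskid style.
for fn in 1 2 3; do
  printf 'word%s\n' "$fn" > "m50152.$fn.txt"
done

# Grab one random line from each file (each demo file has only one
# line, so shuf always returns it) and append to a single file.
for fn in 1 2 3; do
  printf '%s ' "$(shuf -n1 "m50152.$fn.txt")"
done >> shakespeare.txt

cat shakespeare.txt
```

With the real thousand files, only the loop bound and the job number change.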

[changed window to code documentation at Python Example] 

so you can do similar things with Python; that’s another very popular language on HPC systems. If you’re installing Python packages that aren’t on the system, a virtual environment is the best way to do that, and again, you need to do that from the head node, the on-demand node. If you try to have the compute nodes do it, they’ll just network-timeout every time, because they don’t have internet access to pull down Python packages and stuff, okay? This is a pretty long Python example; this one actually does TensorFlow stuff,

[scrolls through up and down]

and I’ll let people walk through that one on their own.
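A minimal sketch of that virtual-environment setup, run from the on-demand node since the compute nodes can’t reach the internet; the environment name demo-venv and the package in the comment are examples, not from the talk:

```shell
# Create and activate a virtual environment on the login (on-demand) node.
python3 -m venv demo-venv
. demo-venv/bin/activate

# Any package installs must happen here, while you have internet, e.g.:
#   pip install tensorflow

python -c 'import sys; print(sys.prefix)'   # shows the venv path while active
deactivate
```

The batch job then activates the same environment in its submit script before running the Python code.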

[changed window to workshop directory] 

I’m going to go through a little bit more of the interface here. So you’ve got the files interface; you can upload and download files. So if I wanted to download our wonderful product of this last thing... let me refresh this

[refreshed workshop directory page] 

so it gets all my stuff. There, it finally went,

[scrolls down] 

and this is a thousand files now or several thousand files with all the output. All right I’m trying to find that shakespeare.txt file

[scrolls up]

and I’m going to give up

[on the left of a slurm file a check box is selected] 

and just download a Slurm file. Anyway, if you want to download something, select it and hit download, and then it’ll download just like any other thing off the web, right? If you want to do more than small files and simple stuff, then Globus, like Jason was talking about, is the way to go. Other things you can do from on-demand:

[changed webpage to “ondemand.c3plus3.org”]

We do have an XFCE desktop system working on Falcon now

[clicked on “Interactive Apps” then clicked “XFCE”] 

so if you need to do interactive computing, then we can basically start up a little graphical environment for you. So you can select however many cores you want

[HTML text box element for entering amount] 

I’ll say I want eight cores, you can do up to 36 which is the number on one node. So this is just going to run on one compute node out on Falcon. I’ll say I want eight, and I’m going to launch this thing

[pressed “Launch XFCE” button]

and it’s trying.

[page loading]

Okay, and so then it created the session successfully and now I can launch

[pressed “Launch XFCE” button] 

and attach, so now I’m running out on a cluster node, and this is a little graphical environment where I can do whatever I want graphically in Linux. Standard kind of stuff. If you want to get out to a terminal, you can get out

[clicked “Applications” then “Terminal Emulator”]

to a terminal,

[used command: “ls”] 

and you can see all your files out there on the node. It’s all the same as what’s on your on-demand login, but if you have graphical programs, Matlab, or you want to do an R session or something, we can probably set that up; likewise if you want interactive plots, interactive Python plots, and all that stuff. When you’re done with one of these interactive sessions, click on your name and log out,

[a dialog popped up; pushed “Log Out”]

and then when I come back to the interactive guy here

[changed webpage back to XFCE Launch] 

eventually, it’ll show that this session is done. There it went.

[session closed]

So if you want a longer-running interactive desktop, obviously pick a partition other than the tiny or short ones; these sessions are allowed on all the partitions, so having a kind of long-running desktop is fine. The on-demand thing also has

[clicked “Jobs” then “Job Composer”] 

its own kind of built-in job composer suite here. So when you do this, it creates what it thinks is a good file path for this project, creates a job script for you, and kind of helps get you started. The downside of this is that it creates this obscure path to all this stuff

[highlights: “boswald.ui/ondemand/data/sys/myjobs/projects/default” ]

and assumes that you only want to work on this stuff from this web interface and not bounce back and forth between the command line and the web interface. You still can; it’s just more obnoxious to deal with the file paths. Let’s see if I

[clicked “New Job” then “From Default Template”]

create a new job from a default template. It creates a new job

[clicked “main_job.sh” changing webpage to a IDE] 

and there’s a really basic bash script in there for your slurm submit script but actually, I guess this is just the job running script.

[went back to the original webpage out of main_job.sh] 

that’s the sh,

[showing submit script]

and there’s yeah so that’s another option for you there.

[opened PowerPoint]

Go back here. Any questions on the on-demand interface or how to poke at Falcon? Let me see if there are any chat questions. I don’t see any chat questions, okay.

[went to the next slide]

Globus,

[current slide is the last slide]

so again, Jason showed most of the Globus stuff.

[closed presentation] 

I’ll just jump in here real quick and highlight the endpoint name again. In Globus, hitting log in should bounce you through,

[opened Globus, logged in, and clicked on the search bar] and if you just search for c3plus3,

ours is the only one that shows up for c3plus3

[clicked “C3PLUS3 Lustre”]

and that should get you to all of the same files that you can see from that other interface.

[scrolling down]

Here’s the workshop directory, [clicked on “workshop” folder] and here are my thousand files, or several thousand files, in there.

[scrolls down] 

If I wanted to download this whole Workshop directory

[moved outside of workshop directory]

obviously, that’s a pain for most interfaces

[scrolled to “workshop” directory] 

but for Globus, it’s not a problem you just click

[selected the folder] 

download the whole folder, and then I can download that whole folder and all those thousands of files all at once; that works pretty nicely. Okay,

[opened PowerPoint presentation]

yeah, done a little early. Questions on Falcon, how to get in there? Real quick, let me show you some more stuff on the docs side,

[closed presentation and opened Safari and current webpage is code documentation]

so if you need to get a hold of us

[clicked “contact”] 

if you want to email all of us at once, it’s just help at c3plus3.org. That’ll go to all of our system administrators, and we can help you out there. Otherwise, our emails are listed individually. If there’s more information you want,

[clicked “Tutorials”]

so there were a few tutorials in here. Let us know if we can add more documentation.

[clicked “Globus”] 

There’s a decent how-to on Globus

[scrolling through] 

on how to create one of those shared collections that Jason was talking about,

[clicked “Stats”] 

and you can kind of see how busy Falcon is at any given point on this little stats page. So you see we really kicked off around the middle of February or so, and now we’ve been chugging along at maybe 25% capacity or something, so again, there’s a lot of capacity out there on Falcon just waiting to be used.

NEWSPEAKER(1): Are you using XDMod for this?

BENJI: No, this is just querying the Slurm job database.

NEWSPEAKER(1): For the desktop sessions, are those nodes exclusive, or are they shared if others use the same node?

BENJI: Sharing is a possibility, yeah. So if you want exclusive access, you can get it;

[changed webpage to XFCE virtual environment page] 

you can ask

[clicked XFCE] 

for it: if you pass --exclusive,

[typed “--exclusive” in “Other Args”]

then it’ll give you a whole node to yourself regardless of how many cores you request. The other way is just to request 36 cores, and then you’ll get your own node that way too.

NEWSPEAKER(1): Are you using cgroups with Slurm?

BENJI: That’s a good question; I’m trying to remember if we did that for Falcon or not. We did it on our HPC up on the Moscow campus; we use cgroups to enforce RAM limitations and stuff, yeah. I also don’t remember off the top of my head if we set it up the same way for Falcon or not. Any questions? Okay, thank you, and yeah, I hope to see you guys using Falcon.