CIQ

Origins and Changes to Singularity and Apptainer

May 5, 2022

Webinar Synopsis:

Speakers:

  • Zane Hamilton, Director of Sales Engineering at CIQ

  • Gregory Kurtzer, CEO at CIQ

  • Ian Kaneshiro, Software Engineer at CIQ

  • Cédric Clerget, Software Engineer at CIQ

Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.

Full Webinar Transcript:

The Apptainer Project [00:00]

Zane Hamilton:

Hello and welcome to another CIQ webcast. Good morning, good evening, and good afternoon, wherever you are. We appreciate you spending the time and joining us. If you're new to us, welcome. We hope that you get something out of this, and if you're returning, as always, add questions as we go and we welcome you back. Today, we're going to be talking more about Singularity and Apptainer, mainly Apptainer. We spent a lot of time on this, but I think it's going to be good that we dive in a little bit more and learn the history of it and where it has come from and where it's going. Today we have Cédric, Ian, and Greg. Welcome.

Gregory Kurtzer:

Hi everyone.

Zane Hamilton:

So if you guys don't mind introducing yourselves. Greg, as always, I think we know who you are. I'll let you go last. Cédric, tell us who you are and what you do here.

Cédric Clerget:

I'm a software engineer at CIQ. Before that, I worked on the Singularity project, and now Apptainer.

Zane Hamilton:

Very nice. Thank you. Ian, you've been here before, but tell us about yourself again.

Ian Kaneshiro:

So I'm also a software engineer at CIQ. I've worked on, and I'm one of the maintainers of, the Apptainer project.

Zane Hamilton:

Thank you. Greg?

Gregory Kurtzer:

Hi, I'm Greg. I'm excited to be here with Cédric and Ian because this represents the lion's share of the critical and feature development that has occurred over the life of the Singularity project – it really is us three. There are several more individuals – they've gone off and are doing different work in different organizations – but the lion's share of this work has been us three.

Zane Hamilton:

That's fantastic. And it's exciting for us to be able to spend time with the three of you together. Before we go any further, Greg, I want to go ahead and tell you that if I suddenly disappear magically, I'm going to leave it to you for a while. We have thunderstorms and I've got lightning within a mile of me and my internet keeps flickering, so exciting times.

Gregory Kurtzer:

I cannot replace you, man. No way. It's going to suck without you.

Singularity [02:12]

Zane Hamilton:

So first of all, I would like for you guys to just tell us a little bit about how long has Singularity been around and, a follow-up to that, why was it created? 

Gregory Kurtzer:

I could take that one on. So Singularity has been around for five or more years, maybe even close to six years now. It was created way back out of high performance computing necessity, to bring containers into the ecosystem. At this point, Docker was taking off and kind of revolutionizing containers in the rest of the ecosystem – enterprise-focused workloads, microservices, and whatnot. And they did such a great job in terms of getting containers out there and making them a piece of our core infrastructure, that researchers started to get wind of this. Researchers started to realize how important containers could be for a lot of what we're doing in high performance computing. But there was a little bit of a disconnect in terms of the architecture that we use for traditional high performance computing.

We've been using a fairly flat and monolithic architecture called the Beowulf, where pretty much every HPC system has been built on this same model: users log in via SSH, they bring over their applications, they compile their applications, they test and optimize their applications, and they bring over their data. Then they work with the batch scheduling system. This is a very manual process where people are logged in over SSH, they have shell accounts, and they're going to be running jobs as themselves on these computers. Then the batch system will actually run the job across hundreds or thousands of compute nodes. What we end up with is an architecture where you have non-privileged users running on these systems, much like we used to run decades ago.

And that's the standard mode of operation in terms of how we operate with these HPC clusters. Docker, at this point, was focused on having a daemon that would run the containers on behalf of the users who wanted to run them. But that daemon process ran as root, as a privileged user. For us to install Docker on HPC, we would have to give users access to this root-running daemon, and potentially give users the ability to manipulate and control that underlying cluster at the root level. Obviously this is a no-go. Whenever users get access to anything that has privilege, we usually call that a security incident. We basically had to figure out, “How do we give users and researchers the ability to run containers in a way that aligns with a traditional HPC-focused architecture?”

I prototyped Singularity and the initial idea, and it just took off; within a year, I started hearing about it being run on almost every major supercomputer that I knew of. And it took off because we solved such a big pain point in the industry in terms of, “How do we support containers on these big HPC supercomputers?” From there, Cédric and Ian joined the project, as well as many other people in the community. Both Ian and Cédric especially – people much smarter than I – were able to take this idea that I put out there and transform it into something that was just amazing.

Just as an example of that, when I first wrote it, there were two versions that I coined. Version one was actually a little bit more like Snaps, if you're familiar with Snaps. Version two was a container subsystem written predominantly in C with a little bit of shell code around the outside of it. That was predominantly written and maintained by me. We ended up having some amazing external contributions from the community to make it more OCI-friendly. At that point, we were able to coordinate and be more compatible with Docker. But several others – Cédric, for example – were then able to take that, rewrite it, and reimplement it in a more security-friendly way, in a more scalable way, and in a more professional way.

They rewrote it in Go, and version three was released. We introduced some additional pieces from other people in the community and other members of our organization, bringing in things like the Singularity Image Format. Cédric was responsible for pretty much architecting and building the entire application set of Singularity v3. I look at the code and it's so far beyond me right now that I can't even help. It's so sad. I look at it and I'm like, I don't even know where to contribute code to. But it is a fantastic rewrite and reimplementation of the code.

Ian took all of the pieces that I made of the build architecture – to build containers and whatnot – and brought all of that forward: taking it out of shell, because I had most of that in shell, rewriting it in Go, and making it better. So between the team you have here, you pretty much have the lion's share of the predominant remaining developers focusing on features and forward movement. There are more that definitely deserve mentioning. And I'm going to bring out Dr. Dave Dykstra. I always call him Dr. Dave because that's his name on either Git or Slack. Dr. Dave is a computer scientist at Fermi National Lab and he has been part of the community since very early on, and has been a tremendous member of the community.

We have got Christian Mariki, who's our release manager, as well as lots of people over the duration of the project that have really contributed and made this project just amazing. Just to name drop a few more: Vanessa has been fantastic in terms of bringing in the initial OCI support. Oh my goodness, I'm going to kick myself. In about two minutes, as soon as I hand the mic back over, all these names are going to flood back into my head. Anyway, I'm going to pass the mic back because I think I'm done.

Zane Hamilton:

That’s fantastic, thank you for sharing that. Whenever we talk about HPC, you start thinking about scale and how big these things can get. In those early versions of Singularity, what was the scale expected to be and what did it get to?

Gregory Kurtzer:

Oh my goodness. One of the biggest supercomputers in the world, Fugaku, in Japan, is running Singularity. By the way, Michael Bauer, Dave Trudgian, Eduardo, Dave Godlove, Yonik are all piling back in my head. Those are some of the previous contributors that have been part of this project. What’s awesome is if you go back and look at the contributors, even though I haven’t touched it in a couple of years now, I think I'm still the leading contributor to the project.

Starting to Use Singularity [11:06]

Zane Hamilton:

Yeah, I think that was just verified too. There you go. So what were some of the initial use cases for Singularity? I mean, I know we talk about how everything was getting exciting around containers. People were starting to implement containers, and CI/CD came along with making sure that it was easy to deploy code and easy to deploy applications, but what were some of those initial use cases in HPC that really demanded or drove that container conversation forward?

Ian Kaneshiro:

So I can jump in on this one. For the scientific community, the key here was actually reproducibility of scientific computations. Scientists will develop a model and a set of applications on one system, and then want to share that application with their colleagues who are working on potentially different systems that have different versions of operating systems installed and different tooling installed. And they want to be able to package up those applications and run them on a different system without worrying about what is typically called dependency hell, where they have to go and talk to administrators to figure out all of the dependencies that need to be installed in order to run this application and verify the results of their colleagues. So that was the core use case, which drove some of the first requests for containerization. I think that was kind of the main driver for the initial investigation into building Singularity and containerization in the HPC space: looking at what the current options were, what was lacking in some of those options, and what needed to be built in order to satisfy that need.
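
To make that concrete, here's a minimal sketch of an Apptainer/Singularity definition file that packages an application with pinned dependencies (the file names and package versions here are hypothetical, not from the webinar):

```
Bootstrap: docker
From: python:3.11-slim

%files
    # Copy the researcher's script from the host into the image
    analysis.py /opt/analysis.py

%post
    # Pin dependencies so colleagues get the exact same environment
    pip install numpy==1.26.4 pandas==2.2.2

%runscript
    # What runs when the container is executed
    exec python /opt/analysis.py "$@"
```

Building with `apptainer build analysis.sif analysis.def` then produces a single file a colleague can run on a completely different distribution with `apptainer run analysis.sif`, with no dependency hunting.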

Zane Hamilton:

That's excellent. Thanks, sir.

HPCs in Singularity [12:47]

Gregory Kurtzer:

There's been a desire to increase the diversity of workflows and job types supported by High Performance Computing systems for almost as long as I can remember. When we first got involved in High Performance Computing, HPC meant something very specific – very tightly coupled, MPI-focused applications that were designed to scale massively large. We created these gigantic systems to support those. But what do you do when those jobs are either waiting for other jobs or they're not fully utilized? You have a whole bunch of computing space. We always wanted to backfill that computing space with other types of jobs. For the most part, a lot of people's jobs would fit on those systems, but we saw the user population grow beyond what we anticipated as typical workflows. Typical users were in areas like high energy physics, computational chemistry, and all of these sorts of things. And then all of a sudden we saw political science wanted an account. We saw library services – the library wanted an account. And I mean, there were all of these very different use cases. So many of them just started saying, “Well, you know, I don't want to run on an older operating system. I want to run on the latest Debian, or I want to run on the latest, you know, whatever. I have a container that I already made and I just want to run inside that container. Why can't I use that container?” And so there was a lot of effort and emphasis put on how we can better enable this – what we called at the time – this long tail of science. And again, containers were incredibly valuable for that.

HPCs in Apptainer [14:38]

Zane Hamilton:

That's very cool. Ian, you have something to show us here a little bit about wanting something that's not necessarily there and being able to break out of that dependency? I hope I'm right. One of the other things I wanted to ask is, “Are there use cases outside of HPC that Apptainer is being used for today?”

Gregory Kurtzer:

I could take that one if no one else is going to step forward.

We're seeing a lot of interesting workflows and use cases for Singularity and Apptainer. By the way, we didn't really talk about the relationship between Singularity and Apptainer at this point. Maybe I'll see if Ian or Cédric wants to take that. But in terms of the use cases, it really is designed as a container solution for traditional HPC, and that's really its primary focus in terms of why it exists. When we decided to move it into the Linux Foundation and rename it – which was a request by the Linux Foundation – we put the name out to the community, and Apptainer ended up winning. The reason why is because what we created is not an HPC-specific solution; it is an application container solution.

I make a very distinct point between a service-focused container, which is predominantly designed to run in the background, versus an application-focused container, which is predominantly designed to run in the foreground. It seems like a somewhat trivial split, but it's actually incredibly important. For example, if you want to run an application that exists inside of a container, but that application is a graphical environment, that application is going to be working very closely with your underlying file system, your home directory, other resources like GPUs, FPGAs, InfiniBand, and maybe direct access to file systems. All of a sudden, when you're thinking about the difference between a service-focused container versus an application-focused container, the application container actually has some real interesting use cases that you want to be thinking through. Apptainer and Singularity are really designed for those sorts of applications rather than services. Can it do services? Sure, you could. Can Docker and other container systems do applications? Sure. But one is designed predominantly for services and one is designed predominantly for applications. So in my mind, you want to use the right tool for the job in all cases. That's what Apptainer and Singularity are best for: applications.
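
As a rough illustration of that application-focused model (image and script names here are hypothetical), the container runs in the foreground, as you, with host resources like your home directory and GPUs available:

```
# Run a containerized application in the foreground, as yourself.
# $HOME and the current working directory are shared with the container
# by default; --nv exposes the host's NVIDIA GPU stack inside it.
apptainer exec --nv pytorch.sif python train.py --data $HOME/datasets
```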

Is SIF Changing? [17:59]

Zane Hamilton:

That's very good and I believe we do have a question. The question is, “Are you going to change the image format name from SIF to something else? AppIF or AIF, is there a desire to change that?”

Ian Kaneshiro:

I can take this one. As part of the migration from Singularity to Apptainer, we've made the decision to keep the same Singularity Image Format for the images built and used by Apptainer, which is the 1.0 version. The primary reason for this is because we understand that there's going to be a period of transition between people using older versions of Singularity and getting onto Apptainer as a new tool on a fresh install. We want to make sure that we don't have Apptainer build containers that would not work on older versions of Singularity, specifically Singularity 3.x. So we don't have any plans for a new Apptainer image format for the 1.0 version of Apptainer. Traditionally, this project will create a new major version when it does make a breaking change to the image format, so that would be something that would come out in, potentially, Apptainer 2.0.

Other Uses for Apptainer [19:12]

Zane Hamilton:

Thank you very much for the question, John, as always. Welcome back, by the way. We've talked about what Apptainer is good at, but I think there's something that we've missed in this that we don't talk about. What are some of the things that Apptainer would be good at, that maybe people aren't thinking about or not using it for today, that maybe they should be? Anyone that has an idea can take that one.

Gregory Kurtzer:

There are a couple of areas where, due to the image format and the problem set that we set out to solve, we did some things differently with Singularity and now Apptainer than the rest of the container ecosystem. One of those, as an example – and this is related to the question that just came up – has to do with SIF, the Singularity Image Format. Most container images that we're thinking about are usually in the format of what's called OCI, and OCI – Open Container Initiative – has a format which is a layered format of tarballs. Those tarballs are stitched together via metadata and a manifest. And we use this manifest to pull all the tarballs, all the layers of a particular container, and we assemble them all at runtime. So when somebody wants to run in a container, we first download all the layers, then we assemble them, and then we run inside of that image.

Now with Singularity, we had to approach this very differently, because for tarballs to be used, they have to be cached. And the last thing we wanted to do if we're running a job across a thousand nodes is cache all of the same data and then have to stitch all of it together from the same state directory. It gets incredibly complicated at that point, so we wanted to do something that is optimized specifically for the architecture of an HPC system, knowing that HPC systems are designed around two facets. The first one is the compute requirements and the second one is the file systems. No matter what kind of a system you're building, whoever's architecting that system has to build a compatible and capable file system for whatever size system they're creating. It made a lot of sense to leverage this shared file system for the container images themselves, and then to be able to do some fancy trickery to allow that file system to actually be optimized for running on some sort of parallel file system or shared file system. So the SIF format is a single file format – an actual file on disk. If I wanted to give the container to you, Zane, I could copy it to you, email it to you, FTP it to you, SCP it to you, or put it up on a website so you can download it from there. I can put it in Google Drive and give you a link to it. There's no limitation in terms of how you move around these containers. Luckily, the OCI registry format now also has the ability to deal with blobs, so we can even use an OCI registry to actually move around these containers and share these containers.

Because it's a single file, you download it and you can look in your home directory and see any number of single file based containers in there. As a matter of fact, we make them executable. They're modeled after the ELF binary format in Linux. We've made some changes to it, of course, but each one of these files exists in your directory. If you want to run one of those files, there's no splatting out of tarballs, there's no caching of data, it just runs. 
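
A quick sketch of what that looks like in practice (host and file names here are hypothetical):

```
# Pull an OCI image and convert it into a single SIF file
apptainer pull alpine.sif docker://alpine:latest

# It's just a file: move it around however you like
scp alpine.sif zane@cluster.example.com:

# SIF files are executable, so you can run one directly...
./alpine.sif

# ...or drop into a shell instantly, with no unpacking or caching step
apptainer shell alpine.sif
```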

Zane and I were playing with Singularity just the other day, and Zane, if you don't mind me saying, you actually remarked on how fast Singularity is when you open up a container. It could be a big container – it could be hundreds of gigabytes if you want. And it's instant: the moment you run a Singularity shell pointing at a container, boom, you're sitting in that shell. There's no waiting, there's no loading, there's no dealing with metadata or layers, it's just instant. Well, the reason that's happening is because of that image format; we're able to mount that image up directly and then leverage it. If we're doing this over parallel storage, it's incredibly efficient. We can do that very, very fast, so you can actually load up an application.

And there's an example of this that I've used to talk about a lot and I'll bring it up here. There have been several national labs and large sites that have basically said, “Okay, we want to run this Python MPI program across the whole system.” If they do it just on their standard parallel file system like Lustre, there were a couple of cases where it was taking nearly 30 minutes just to start the job – just to load up all of the data, the Python bits, and all the metadata operations. To do that on Lustre is like a distributed denial of service attack on your metadata server: it's not fun. You have thousands of nodes literally congesting your metadata server to the point where it takes 30-ish minutes. Well, if you put that whole application into a Singularity container, and put that Singularity container on the exact same storage that you were launching that Python job from, it went down to six seconds. 30 minutes to six seconds. Massive, massive optimization. And that has to do with that image format.

The reason I'm telling you this whole story is because that image format is so important and we've approached it very differently than the rest of the community. From that image format, we're able to think about things differently, like cryptographic signatures: signing that data, validating that data, building a trail of provenance around that data. Wherever you bring that single file container, there are no services; there's nothing you need to consider; it's all in the file. Just like you sign an RPM, right? If you're going to build an RPM and distribute that for your operating system, and somebody does a DNF or a Yum install of that RPM, you want to have that signed, don't you? Well, why would you want anything different from the containers that you're running? You want those signed.

So we did that. We've actually had that working for four-ish years now, maybe… it's been a while. That's been a really important piece of it, and we can do things like encryption. We can actually encrypt that container, and we never splat out the decrypted data. So again, lots of things that we've done differently that enable really cool use cases. What if you've now put that on an edge device that's in an insecure area? You can actually protect your data. If somebody gets a hold of that edge device, the only thing they're going to see is an encrypted blob. They won't see your models. There are a lot of cool use cases around that. Ian, Cédric, anything I'm missing?

Ian Kaneshiro:

I think you covered it.

Back to SIF [26:43]

Zane Hamilton:

We have two questions from Tron. In his previous question, he was asking not so much about changing the file format or the extension, but about actually changing the name – what we call an Apptainer container.

Ian Kaneshiro:

Oh, you can use any file extension you want, but the actual underlying thing that we're going to use, we're still going to call it SIF. It probably makes sense to still use .sif as a suffix, if you want, or you can use .app or anything really, it's not relevant to the actual execution of the container.

Zane Hamilton:

Thank you.

Gregory Kurtzer:

I think the question was also about the project name, SIF, because it is a sub-project. We're not going to change the name of SIF internally or the subproject of SIF to AIF. 

Ian Kaneshiro:

Yeah, that is correct.

Zane Hamilton:

Very good. Thanks again for the question, John. This one's actually directly for Cédric. What is the most complex internal component or subsystem of Apptainer, in your opinion?

Cédric Clerget:

I think this is regarding the mount system. With Singularity v2, we had multiple vulnerabilities which made it possible to exploit and gain privileges, and for v3, it has been reworked in a way where mounting is done synchronously. Yeah, there are multiple steps involved to create the container and the mounts done inside this container. That is the most complex part.

Cryptography and Encryption [29:08]

Zane Hamilton:

Thank you very much. All right. So Greg touched on being able to cryptographically sign things. I think it's important to understand the why, the how, why does that matter to me? Why would I care if I could encrypt the container? Why would I care about where I encrypt the container? Why is that important or why is it different?

Ian Kaneshiro:

There are two things there, right? There's the question of why you would care if you can sign a container. That's maybe the first thing to touch on. I think it's kind of interesting that we've had so much time with the container community where everyone's been using containers to manage and install applications, but nothing's been signed as part of how it's distributed, in general. Whereas when you compare that to packaging systems like RPM, you really wouldn't trust an RPM that wasn't properly signed. It would be a bit suspect to install that on a production system. But that's how we've been working in the container community for a long time. That's a gap that many projects have seen, and they want to build standards that will allow everyone to use a common signing method.

When Apptainer or Singularity was created and we were looking at doing cryptographic signatures, these things weren't in place in a way where we could integrate with those types of projects. At this point, I don't see a clear project in the forefront that everyone is adopting and can align to. What Apptainer has done is implement a signing method as a part of that single image file, where we can attach a signature generated by PGP keys into that container file, and use that to validate the contents of the file system that we mount for the container. Someone can sign a container, do a key exchange out-of-band of the container transfer, and then the person that receives the container can verify that it came from the identity that they intended and make sure there wasn't any corruption or a man-in-the-middle, essentially.
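
In command form, that workflow looks roughly like this (the container name is hypothetical):

```
# Generate a PGP keypair in Apptainer's local keyring
apptainer key newpair

# Embed a signature directly into the SIF file
apptainer sign container.sif

# The recipient, after an out-of-band key exchange, verifies it
apptainer verify container.sif
```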

That's one thing that's valuable and nice about the Singularity Image Format. Another thing, on the encryption side, is that it's nice to be able to have the protection of your data while it is in transit and also potentially on other systems that you're executing on. There are use cases where people want to protect things like AI models or machine learning models – things that are in the classification of intellectual property – that they are packaging inside of their containers and that are core to their value adds. They want to make sure that that information is protected while it's in transit, and also potentially while it's on other systems that other users are able to access.
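
A minimal sketch of the encryption workflow, assuming passphrase-based encryption (PEM key pairs are also supported via a --pem-path option; file names here are hypothetical):

```
# Build an encrypted container; you'll be prompted for a passphrase
apptainer build --passphrase encrypted.sif container.def

# Running it prompts for the passphrase again; the decrypted file system
# is never written out to disk
apptainer run --passphrase encrypted.sif
```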

What’s Next from Apptainer [32:19]

Zane Hamilton:

That's awesome. Thank you very much. I think the next logical step in this is to ask, “With everything that's gone on and all the different changes that have taken place, moving from Singularity to Apptainer, what's coming next, or what can people expect to see next out of Apptainer?”

Cédric Clerget:

There are steps toward running fully unprivileged. Dr. Dave has done some work in this direction. To give a bit of context, Singularity/Apptainer can run unprivileged ever since the Linux kernel has supported user namespaces, which allow you to create a container environment. The problem is that if we want to support the SIF format in that mode, we need to extract the image onto the disk and execute the container from there. The idea is to keep the SIF format and run the container from the SIF image directly. Dr. Dave has implemented, for the next release I think, the ability to load SIF images with squashfuse, and he also added support for fuse-overlayfs to be able to write inside the container into a persistent layer. So, yeah, that's the direction. We will probably extend that by using the Linux Kernel Library. There was some work in the past and a proof of concept done. It worked pretty well, and we should be able to also support encryption fully unprivileged, but there is still some work to do on this.

Gregory Kurtzer:

Running completely unprivileged while supporting SIF and all of the other features of Singularity and Apptainer is really amazing. Now, we have supported fully rootless, unprivileged operation since early v2 days – so, four-ish years – and we've done that through the kernel's user namespace, but there were a lot of features that were not able to work properly when we ran in that unprivileged mode. Now, being able to run in the fully unprivileged mode and still get access to all of those features is remarkable. That is very, very cool.
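
For reference, a sketch of forcing that unprivileged mode explicitly (Apptainer can also fall back to it when the setuid helper isn't installed; the container name is hypothetical):

```
# Run in user-namespace (non-setuid) mode; per the discussion above, recent
# releases can mount the SIF with squashfuse instead of extracting to disk
apptainer exec --userns container.sif id
```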

Checkpoint Restarts [36:11]

Zane Hamilton:

That is very cool. I think one of the other things that is coming, and that I've heard you guys talking about, is checkpoint/restart. Ian, do you want to talk a little bit about what that is?

Ian Kaneshiro:

Yeah. That's an integration with a project that's been around for a little bit in the HPC community. Some sites are very familiar with this project called DMTCP. The idea of this project is to allow checkpointing transparently when you're running your applications. We have created integration with that technology in order to allow that to be injected into containers when they're running, so that you can checkpoint your containerized applications as well. There are a couple caveats to that because it gets in between your application and the shared objects that it links against in order to have hooks into your application and do its checkpointing. So you need to have dynamically linked applications in order to use this, but it does add some really interesting capabilities to systems that need to be able to interrupt jobs.
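
A rough sketch of the instance-based workflow, based on the Apptainer checkpoint documentation (names are hypothetical, and exact commands and flags may vary by version):

```
# Create named checkpoint storage
apptainer checkpoint create my-ckpt

# Start an instance with DMTCP injected (the app must be dynamically linked)
apptainer instance start --dmtcp-launch my-ckpt app.sif my-instance

# Checkpoint the running instance
apptainer checkpoint instance my-instance

# Later, resume the workload from the saved state
apptainer instance start --dmtcp-restart my-ckpt app.sif my-instance
```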

Zane Hamilton:

That's very cool. I've had several conversations where there's a lot of excitement around that. Being able to do that for a lot of different reasons is very cool. I also see that Tron is heckling Greg. I love it, and welcome more of it. Tron says he'll be encouraged to switch from Singularity to Apptainer if Greg can repeat Apptainer, not Singularity, 10 times without mistakes. And he said maybe he'll take five.

Gregory Kurtzer:

Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity, Apptainer not Singularity.

CNCF Projects [38:00]

Zane Hamilton:

Wow, thank you. I wanted to encourage you guys. If you have other questions, please start sending them in. I think we've gotten to the end of what we wanted to talk about. We want to make sure we leave time to answer questions. I see that Justin just asked one: “Are there plans to better integrate with CNCF projects like Sigstore and Notary v2?”

Ian Kaneshiro:

There are. Something we didn't talk about regarding the move of Singularity to Apptainer is that the biggest spur for that move was the opportunity to join the Linux Foundation, which puts us in the same project umbrella as many other container-based projects, or container integrity-based projects. A couple listed here are things like Sigstore and Notary v2. Sigstore is a relatively mature project, and that's something that we could integrate with today without any technical issues, as far as I can tell from my evaluation. The key there is that you need to have your Singularity containers stored within an OCI registry that supports OCI artifacts. That's how you actually attach the signatures to your container images: you basically say you want to sign this container at this registry location, and it'll attach a signature to that container at that registry location by adding another artifact to a registry path that's named based off of the container you're intending to sign.
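
A sketch of what that could look like today, assuming a registry with OCI artifact support and Sigstore's cosign tool (the registry path and key names are hypothetical):

```
# Push the SIF to an OCI registry as an ORAS artifact
apptainer push container.sif oras://registry.example.com/team/container:1.0

# Sign the pushed artifact with cosign; the signature is attached
# alongside it in the registry
cosign generate-key-pair
cosign sign --key cosign.key registry.example.com/team/container:1.0
```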

That's something that's very doable and something that we're looking at. Notary v2 is basically a project looking to make a better version of what is currently, I think, called Notary. I'm not sure if it's called Docker Notary, but it's a container integrity system that came out of Docker, the company. It hasn't built a lot of traction in the general open source community at this point. Notary v2 is an initiative to build something very similar to that, with, I think, slightly different properties than what Sigstore provides in terms of your integrity model and how you verify whether something is from a particular entity. They're still in the discussion phase and building out prototypes, and to my knowledge, there hasn't been something set in stone as “this is the path we're going to go down and what you need to do in order to integrate with us.”

Zane Hamilton:

Great. Thank you very much for the question, Justin. We have another question from George: “Any changes with the move from Singularity to Apptainer?”

Ian Kaneshiro:

I can jump on this one again. The change is just that the application name is different, but we also supply a symlink with the name singularity, so you can still use your same scripts or commands if you still sometimes type Singularity instead of Apptainer, like Greg, and have all of your commands work. There are some changes to some of the default remote endpoints – for example, we store keys by default on the OpenPGP key server instead of where they were stored before – but really, the core use case and usage of Apptainer shouldn't feel any different than what you're familiar with from Singularity. We have also taken the effort to do things like migrate user configuration data when you upgrade from Singularity to Apptainer. It should feel seamless: you should have the same settings that you had before and the same configurations for things like registry credentials and stuff like that when you have a fresh install of Apptainer that upgrades from a Singularity installation.
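
For example, on a fresh Apptainer install you can check the compatibility symlink yourself (a small sketch; exact paths will vary by system):

```
which singularity                           # the compatibility symlink
readlink -f "$(which singularity)"          # should resolve to the apptainer binary
singularity exec mycontainer.sif hostname   # behaves the same as `apptainer exec`
```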

Where is Apptainer Available [42:11]

Zane Hamilton:

Great. Thank you very much. Do you have any other questions? Something Greg and I had talked about is when Apptainer is going to be available in EPEL.

Gregory Kurtzer:

We should have invited Dr. Dave. Maybe in the future we'll invite Dr. Dave to our webinar. Dr. Dave is a Fedora and EPEL maintainer as well. He actually maintains the package, both Singularity and Apptainer, in EPEL. That is actually underway. I believe he was waiting not for the 1.0 release, but I think for 1.0.1 or 1.0.2 – something that just hammers out some of the bugs and guarantees a certain level of stability, to demonstrate that everything is working properly, let it go through some initial testing in the community, and then release it into EPEL. At that point, I think it is going to replace Singularity in EPEL, so if you do a DNF or a Yum install for Singularity, I believe it's actually going to install Apptainer, and Apptainer will also upgrade Singularity.
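
Once that lands, installation on an Enterprise Linux system should follow the usual EPEL workflow (a sketch, assuming the package is named apptainer):

```
# Enable EPEL, then install Apptainer; per the discussion above, this is
# expected to replace or upgrade an existing Singularity package
sudo dnf install -y epel-release
sudo dnf install -y apptainer
```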

Compatible with Docker [43:44]

Zane Hamilton:

Excellent. Oh, got one more question. George asked again, “Is Apptainer completely compatible with Docker containers?”

Gregory Kurtzer:

Completely? Well, I'd say from a container perspective, yes. From an image perspective, we are actually leveraging the Open Containers implementation for downloading and managing the containers and putting them onto disk. So when you do something like a Singularity build from Docker, and you put that into a Singularity container, it's using all those same bits that any OCI-compatible runtime would use to pull from those registries. So from that perspective, yes, a hundred percent. But then what we do is two things: we squash those layers down as we convert it to a SIF, into just that single SIF layer. That's the first thing that we do that is a little bit different. But also, our runtime is a little bit different.

I would say it's not completely compatible, but it's really dang close. You can pretty much use any container that is in Docker Hub or any of your OCI-based containers. You can use them with Singularity a hundred percent. Now, again, not everything's going to translate a hundred percent. For example, if your Docker container or your OCI container specifies that it needs to run as a particular user for that service, we're not going to support that. We always run as the user that's calling the application. So if GMK is my username and my UID is 501… I just dated myself, didn't I… um, 1001, when I actually launch the container and I do a “whoami” or an “id”, it's going to show that I'm the same user inside the container with the exact same groups, the exact same lineup, everything.

I didn't lose or gain any abilities from a credential access perspective, but what Singularity/Apptainer does do is block any privilege escalations from there. So I will no longer be able to get root once I'm inside that container – little nuances like that. We handle things a little bit differently, but at least for an HPC or an application-focused use case, you're not going to notice any sort of incompatibilities. If you're trying to run services with Apptainer, you might see differences in terms of how they operate. The container format is close, but again, not exact. Hopefully that answers the question.
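
A small sketch of that behavior (the image is a real Docker Hub name, but treat the specifics as illustrative):

```
# Pull any Docker Hub / OCI container; the layers are squashed into one SIF
apptainer pull docker://almalinux:9      # produces almalinux_9.sif

# You are the same user inside the container as outside: compare the output
id
apptainer exec almalinux_9.sif id
```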

Zane Hamilton:

Very good. Thank you. Well, I want to thank you three for showing up to this webcast and also for the work that you've done with Apptainer. I really appreciate it. Thanks for spending the time with us. Cédric, I know it's probably later for you than it is for us, so thank you very much for joining. I also want to point out that we are hiring for Apptainer roles, as well as other positions within our development team for CIQ. Go to our website. There are plenty of opportunities to apply for. We are looking forward to hearing from you guys. Thank you very much, and we will see you again next week.

Gregory Kurtzer:

Closing thought: we've had a few more requests for Apptainer-focused webinars, one of which is around using CI systems to automatically build and manage your containers. So you can be doing CI, automatically building your Apptainer containers, and making them available to HPC and application use cases. We've gotten use cases like that, we've gotten use cases around, “How do we do encryption?”, “How do we do cryptographic signing?”, “Could we demonstrate some of this stuff?” So we've gotten a number of additional webinar requests and demonstration requests. Definitely sign up as Zane said. Like and subscribe to keep track because these additional use cases are coming and I'm super looking forward to those.

Zane Hamilton:

Appreciate it. Thanks again for spending your time with us. We will see you next time.