103: GreyBeards talk scale-out file and cloud data with Molly Presley & Ben Gitenstein, Qumulo

Sponsored by:

Ray has known Molly Presley (@Molly_J_Presley), Head of Global Product Marketing, for just about a decade now, and we both just met Ben Gitenstein (@Qumulo_Product), VP of Products & Solutions at Qumulo, on this podcast. Both Molly and Ben were very knowledgeable about the problems customers have with massive data troves.

Molly has been on our podcast before (with another company, see: GreyBeards talk HPC storage with Molly Rector, CMO & EVP, DDN). And we have talked with Qumulo before as well (see: GreyBeards talk data-aware, scale-out file systems with Peter Godman, Co-founder & CEO, Qumulo).

Qumulo has a long history of dealing with customer issues around data center application access to data, usually large data repositories with billions of small or large files accumulated over time. More recently, Qumulo has taken on similar problems in the cloud as well.

Qumulo’s secret has always been to let researchers run their applications wherever their data resides. This has led Qumulo’s software defined storage to offer multi-protocol access as well as completely native AWS and GCP cloud versions of their solution.

That way customers can run Qumulo in their data center or in the cloud and have the same great access to data. Molly mentioned one customer that creates and gathers data using the SMB protocol on prem and then, after replication, processes it in the cloud.

Qumulo Shift

Ben mentioned that many competitive storage systems are business model focused. That is, they are all about keeping customer data within their solutions so they can charge for capacity. Although Qumulo also charges for capacity, with the new Qumulo Shift service customers can easily move data off Qumulo and into native cloud storage. Using Shift, customers can free up Qumulo storage space (and cost) for any data that only needs to be accessed as objects.

With Shift, customers can replicate or move Qumulo file data, on prem or in the cloud, to AWS S3 objects. Once in S3, customers can access that data with AWS native applications or other applications that use AWS S3 data, or make it accessible around the world.

Qumulo customers can select directories to Shift to an AWS S3 bucket. The Qumulo directory name is mapped to an S3 bucket name, and each file in that directory is copied to an S3 object in that bucket with the same file name.
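
To make that mapping concrete, here is a minimal sketch of a directory-to-bucket copy using boto3. It only illustrates the file-to-object naming scheme described above and is not Qumulo's implementation; the directory path and bucket name are hypothetical.

```python
# Minimal sketch of a directory-to-bucket copy, mirroring the mapping Shift
# performs: one bucket per directory, one object per file, same names.
# The path and bucket name below are hypothetical examples.
import os
import boto3

s3 = boto3.client("s3")

def shift_like_copy(directory, bucket):
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            # A file "results.csv" in the directory becomes object "results.csv"
            # in the bucket named after the directory.
            s3.upload_file(path, bucket, name)

shift_like_copy("/qumulo/seismic-results", "seismic-results")
```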

At the moment, Qumulo Shift only supports AWS S3. Over time, Qumulo plans to offer support for other public cloud storage targets for Shift.

Shift is based on Qumulo replication services. Qumulo has a number of patents on replication technology that provides for sophisticated monitoring, control and high performance for moving vast amounts of data.

How customers use Shift

One large customer uses Qumulo cloud file services to process seismic data but then makes the results of that analysis available to other clients as S3 objects.

Customers can also take advantage of AWS and other applications that support objects only. For example, AWS SageMaker Machine Learning (ML) processes S3 object data. Qumulo customers could gather training data as files and Shift it to S3 objects for ML training.

Moreover, customers can use Shift to create AWS S3 object backups, archives and DR repositories of Qumulo file data. Ben mentioned DevOps could also use Qumulo Shift via APIs to move file data to S3 objects as part of new application deployment.

Finally, using Shift to copy or move file data to AWS S3 makes it ideal for collaboration by researchers, analysts and just about any other entity that needs access to the data.

The podcast ran ~26 minutes. Molly has always been easy to talk with and Ben turned out also to be easy to talk with and knew an awful lot about the product and how customers can use it. Keith and I enjoyed our time with Molly and Ben discussing Qumulo and their new Shift service. Listen to the podcast to learn more.


Ben Gitenstein, VP of Products and Solutions, Qumulo

Ben Gitenstein runs Product at Qumulo. He and his team of product managers and data scientists have conducted nearly 1,000 interviews with storage users and analyzed millions of data points to understand customer needs and the direction of the storage market.

Prior to working at Qumulo, Ben spent five years at Microsoft, where he split his time between Corporate Strategy and Product Planning.

Molly Presley, Head of Global Product Marketing, Qumulo

Molly Presley joined Qumulo in 2018 and leads worldwide product marketing. Molly brings over 15 years of file system and archive technology leadership experience to the role.

Prior to Qumulo, Molly held executive product and marketing leadership roles at Quantum, DataDirect Networks (DDN) and Spectra Logic.

Presley also created the term “Active Archive”, founded the Active Archive Alliance and has served on the Board of the Storage Networking Industry Association (SNIA).

0102 GreyBeards talk big memory data with Charles Fan, CEO & Co-founder, MemVerge

It’s been a couple of months since we last talked with a startup, so the GreyBeards thought it was time. We reached out to Charles Fan (@CharlesFan14), CEO and Co-Founder of MemVerge, to find out about their big memory solution, or as Charles likes to call it, “software defined (big) memory”. Although neither Matt nor I had ever talked with Charles before, he’s been just about everywhere in the storage industry throughout his career.

If you have been following my RayOnStorage blog you will have seen a post (Need memory, Intel’s Optane DC PM to the rescue) last year on Intel’s new Persistent Memory solutions using 3D XPoint, called Optane DC PM (data center, persistent memory). At the announcement, Intel made available a couple of ways customers could use Optane DC PM (PMem).

Optane DC PM primer

Native Optane DC PM access modes include:

  • A Memory Mode, which has PMem emulating a large volatile memory space, using a defined ratio of DRAM as a cache in front of the Optane DC PM memory behind it.
  • An Application Direct (AppDirect) Mode, which supports two sub-modes: a storage device mode that uses PMem to emulate a persistent, 4KB block storage device; and a byte addressable, persistent memory address space mode that uses PMem to emulate a large, non-volatile memory space (see the sketch after this list). AppDirect memory content persists across boots, power failures and other system crashes.
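
To make the byte addressable AppDirect sub-mode concrete, below is a minimal sketch of user space access through a DAX-mounted PMem file system. The mount point and file name are assumptions, and production code would normally rely on a library such as PMDK to order and flush CPU caches correctly; Python's mmap flush is only a coarse stand-in.

```python
# Minimal sketch of byte-addressable AppDirect access from user space via a
# DAX-mounted PMem file system. The path is a hypothetical example.
import mmap
import os

PMEM_FILE = "/mnt/pmem/example.dat"    # hypothetical path on a DAX mount
SIZE = 64 * 1024                       # 64KB persistent region

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# With DAX, loads and stores through this mapping go to persistent media
# directly, bypassing the page cache.
buf = mmap.mmap(fd, SIZE)
buf[0:16] = b"persists on boot"        # ordinary stores into persistent memory
buf.flush()                            # msync; coarse stand-in for cache flushes
buf.close()
os.close(fd)
```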

Native PMem modes are selected in the BIOS and take effect at boot time. Optane DC PM on a server can be split up across any of these three modes. Currently, with Optane DC PM (Gen 1), a single server can have up to 6TB of DC PM, which will go up to 8TB with Optane DC PM Gen 2 coming out later this year.

MemVerge Memory Machine

MemVerge has written a “software defined memory” service called the Memory Machine, which sits above the Intel Optane DC PM in server(s) and provides application access AND data services for PMem.

Charles likens their Memory Machine to what VMware did for CPU cores, i.e., they provide memory virtualization. This, Charles believes, will bring on the age of Big Memory applications. He feels that PMem, with Memory Machine on top of it, will eliminate the need for high performance, tier 0 storage. Tier 0 storage is a ~$10B market today, which he sees shifting from networked storage to PMem solutions.

Memory Machine Data Services

One of the data services that the Memory Machine offers is a PMem snapshot service. PMem thick or thin snapshots can be taken any number of times (for thick snapshots, storage space availability may limit their number) and can be taken as often as once per minute. PMem thin snapshots take little time to complete and are very PMem space efficient, whereas thick snapshots are a PMem-to-PMem copy of the data, which takes longer and doubles the memory used by the PMem being snapshot.
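
The difference between the two snapshot types can be sketched conceptually as follows. This is an illustration of full-copy (thick) versus copy-on-write (thin) snapshots in general, not MemVerge's implementation.

```python
# Conceptual contrast of thick vs thin snapshots (illustration only): a thick
# snapshot copies every page up front, doubling space; a thin snapshot shares
# pages with the live copy and preserves a page only just before it is overwritten.
class ThickSnapshot:
    def __init__(self, pages):
        self.pages = list(pages)           # full copy of every page

class ThinSnapshot:
    def __init__(self, source):
        self.source = source               # share the live pages
        self.overrides = {}                # old pages preserved before overwrite

    def preserve_before_write(self, idx):
        # Called when the live copy is about to overwrite page idx.
        if idx not in self.overrides:
            self.overrides[idx] = self.source[idx]

    def read(self, idx):
        return self.overrides.get(idx, self.source[idx])

live = ["page0", "page1", "page2"]
snap = ThinSnapshot(live)
snap.preserve_before_write(1)
live[1] = "page1-new"
assert snap.read(1) == "page1"             # snapshot still sees the old data
```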

One significant use case for PMem snapshots is checkpointing for crash recovery. Charles mentioned that many securities and financial analysis firms use KDB as a streaming database service to monitor/analyze market activity and provide automated trading and other market services. These firms are always trying to gain an advantage through speed and reduced latency, and as a result have moved their time sensitive processing to in memory data structures/databases.

However, because checkpointing for crash recovery takes time, they usually checkpoint in memory databases only once a day (after market close) and maintain a log of database transactions on SSD. If there’s a system crash, they reload the last checkpoint and replay all the transaction logs since that checkpoint to bring their in memory database back to the point of the crash. Given the number of transactions these firms do, this sort of crash recovery can take hours.

With Memory Machine, these customers can take in memory checkpoints every minute and, in the event of a crash, only have to replay a minute’s worth of transaction logs, which can be done quickly to get back up and running.
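
The underlying checkpoint-plus-log-replay pattern looks roughly like the sketch below; it is a generic illustration (not KDB or MemVerge specific), and the file formats and apply function are assumptions. Recovery time is dominated by the number of logged transactions since the last checkpoint, which is why per-minute checkpoints shrink the recovery window so dramatically.

```python
# Generic checkpoint + log-replay crash recovery sketch. Recovery cost scales
# with the number of log entries written since the last checkpoint.
import json
import pickle

def apply_txn(state, txn):
    # Illustrative only: apply a single logged update to the in-memory state.
    state[txn["key"]] = txn["value"]

def recover(checkpoint_path, log_path):
    # 1. Reload the last in-memory checkpoint.
    with open(checkpoint_path, "rb") as f:
        state = pickle.load(f)
    # 2. Replay every transaction logged after that checkpoint.
    with open(log_path) as f:
        for line in f:
            apply_txn(state, json.loads(line))
    return state
```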

Other environments do similar checkpoint crash recoveries, all of which could also take advantage of PMem snapshots to take more frequent checkpoints. Charles mentioned rendering farms on the podcast, but long scientific simulations (HPC) and others use checkpoints for crash recovery as well.

Another data (or application) service offered by Memory Machine is application cloning. Most in memory applications are single threaded, meaning they can only take advantage of a single CPU core (thread). In order to speed up processing, customers must shard (split up) or copy their database and application onto other servers/CPUs/cores to provide more processing power. Memory Machine can use its thick or thin snapshots to clone applications in seconds.

Charles also mentioned that Memory Machine offers PMem dynamic reconfiguration. Instead of having to make BIOS changes and reboot server(s) to re-allocate PMem across different applications, Memory Machine is allocated 100% of the PMem at boot time; then, on demand, any time it’s operating, operators can use MemVerge’s GUI/CLI to carve PMem up into any number of application memory spaces. As application demand for in memory data changes, operators can use the Memory Machine to re-allocate PMem to keep up.

Memory Machine also supports PMem clustering, or scaling across servers. With the current 6TB (and soon 8TB) per server PMem limit, some customer applications still run out of memory. Memory Machine can cluster or aggregate PMem across up to 32 servers to support a single, larger PMem address space of 192TB (Gen 1) or 256TB (Gen 2) DC PM. The Memory Machine uses an RDMA (RoCE Ethernet or InfiniBand) cluster interconnect, which adds ~1 microsecond of overhead to access PMem in another server. This comes with automatic data tiering across DRAM, local (on the server) PMem and remote (across the cluster interconnect) PMem.

Charles mentioned that another data service provided by Memory Machine is (synchronous or asynchronous) replication. One use case for replication is to create a pub-sub service for market data.

Charles believes that in memory databases and data processing workloads are just starting to become popular. Besides KDB and rendering, other data processing workloads such as AI training/inferencing, Redis applications, and other database systems can take advantage of large, in memory data structures to speed up their processing.

MemVerge’s EAP (early access program) opened up recently (5/19/2020). Charles suggested anyone using large, in memory data processing, take a look at what the Memory Machine can do and contact them to sign up.

The podcast runs ~45 minutes. Charles was very articulate as well as knowledgeable about the technology and its applications. He was great to talk tech with. Matt and I had a fun time talking Optane DC PM and Memory Machine functionality/applications with him. Listen to the podcast to learn more.

Charles Fan, CEO & Co-founder, MemVerge

Charles Fan is co-founder and CEO of MemVerge. Prior to MemVerge, Charles was a SVP/GM at VMware, founding the storage business unit that developed the Virtual SAN product.

Charles also worked at EMC and was the founder of the EMC China R&D Center. Charles joined EMC via the acquisition of Rainfinity, where he was a co-founder and CTO.

Charles received his Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and his B.E. in Electrical Engineering from the Cooper Union.

0101: Greybeards talk with Howard Marks, Technologist Extraordinary & Plenipotentiary at VAST

As most of you know, Howard Marks (@deepstoragenet), Technologist Extraordinary & Plenipotentiary at VAST Data used to be a Greybeards co-host and is still on our roster as a co-host emeritus. When I started to schedule this podcast, it was going to be our 100th podcast and we wanted to invite Howard and the rest of the co-hosts to be on the call to discuss our podcast. But alas, the 100th Greybeards podcast came and went, before we could get it done. So we decided to refocus this podcast back on VAST Data.

We talked with Howard last year about VAST and some of this podcast covers the same ground (see last year’s podcast with Howard on VAST Data) but I highlighted below different aspects of their product that we also discussed.

For starters, VAST just finalized a recent round of funding, which, if I recall correctly, valued them at over $1B USD, making them yet another data storage unicorn.

VAST is a scale out, disaggregated, unstructured data platform that takes advantage of the economics of QLC SSD (from Intel) combined with the speed of 3D XPoint storage class memory (Optane SSD, also from Intel) to support customer data. Intel is an investor in VAST.

VAST uses multiple front end (controller) servers, with one or more HA NVMe drive module(s) connected via a dual InfiniBand or 100Gbps Ethernet RDMA cluster interconnect. Each HA NVMe drive module has two adapter cards (IO modules), one for each connection, that take IO and data requests and transfer them across a PCIe bus to the QLC and Optane SSDs. They also have a Mellanox (another investor) switch on their backend with a (round robin) DNS router to connect hosts to their storage (front end) servers.

Each backend HA NVMe drive module has 12 1.5TB Optane U.2 SSDs and 44 15.4TB QLC SSDs, for a total of 56 drives. Customer data is first written to Optane and then destaged to QLC SSD.

QLC has the advantage of being 4 bits per cell (for a lower $/GB stored), but its endurance, or drive writes per day (dw/d), is significantly worse than TLC. So VAST has had to work to increase QLC endurance in their system.

Natively, QLC offers ~0.2 dw/d when doing random 4K writes. However, if your system does 128KB sequential writes, it offers 4.0 dw/d. VAST destages data from Optane SSDs to QLC in 1MB chunks which both optimizes endurance and reduces garbage collection write amplification within the drive.
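
Some back-of-the-envelope arithmetic with those figures shows how much the write pattern matters for a 15.4TB QLC drive; the 5 year service life used here is an assumption for illustration, not a VAST or drive specification.

```python
# Endurance math using the dw/d figures above; 5-year lifetime is an assumption.
DRIVE_TB = 15.4
YEARS = 5
for label, dwpd in [("random 4K writes", 0.2), ("large sequential writes", 4.0)]:
    per_day_tb = DRIVE_TB * dwpd
    lifetime_pb = per_day_tb * 365 * YEARS / 1000
    print(f"{label}: {per_day_tb:.1f} TB/day, ~{lifetime_pb:.0f} PB over {YEARS} years")
# random 4K writes: 3.1 TB/day, ~6 PB over 5 years
# large sequential writes: 61.6 TB/day, ~112 PB over 5 years
```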

Howard mentioned their front end servers are stateless, i.e., they maintain no state information about any ongoing IO activity. Any IO state information is maintained by the system on Optane SSDs. Each server maintains a work-log-like structure on Optane that describes what it is doing in support of host IO and other activities. That way, if one front end server goes down, another can access its log and take over its activity.

Metadata is also maintained only on Optane SSDs. Howard called their metadata structure a V-tree (B-tree). VAST mirrors all meta-data and customer data to two Optane SSDs. So if one Optane SSD goes down, its pair can be used to continue operations.

In last year’s podcast we talked at length about VAST data protection and data reduction capabilities, so we won’t discuss these any further here.

However, one thing worth noting is that VAST has a very large RAID (erasure code protection) stripe. Data is written to the QLC SSDs in a VAST designed, locally decodable erasure coding format.

One problem with large stripes is rebuild time. VAST’s locally decodable parity codes help with this but the other thing that helps is distributing rebuild IO activity to all front end servers in the system.

The other problem with large stripe sizes is garbage collection. VAST segregates customer data by “temporariness” based on their best guess. In this way all data in one stripe should have similar lifetimes. When it’s time for stripe garbage collection, having all temporary data allows VAST to jettison the whole stripe (or most of it) rather than having to collect and re-write old stripe data to another new stripe.
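
The payoff of lifetime-segregated stripes can be sketched generically as below. This is an illustration of the idea, not VAST's code: when every block in a stripe goes stale together, garbage collection frees the stripe without rewriting anything.

```python
# Generic illustration of lifetime-segregated stripes: group blocks with
# similar predicted lifetimes, so stripes can often be freed wholesale.
from collections import defaultdict

stripe_buffers = defaultdict(list)   # predicted lifetime class -> pending blocks

def place(block, predicted_lifetime):
    """Buffer blocks with similar predicted lifetimes into the same stripe."""
    stripe_buffers[predicted_lifetime].append(block)

def garbage_collect(stripe_blocks, is_live):
    """Return the blocks that must be rewritten elsewhere before freeing the stripe."""
    live = [b for b in stripe_blocks if is_live(b)]
    # Best case: nothing is live, so the whole stripe is jettisoned with zero
    # rewrite work (no garbage-collection write amplification).
    return live
```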

VAST came out supporting NFSv3 and S3 object storage protocols. Their next release adds support for SMB 2.2, data-at-rest encryption and snapshotting to an external S3 store. As you may recall, SMB is a stateful protocol. In VAST’s home-grown SMB implementation, front end servers can take over SMB transactions from failed servers without having to fail the whole transaction and start over again.

VAST uses a fail-in-place maintenance policy. That is, failed SSDs are not normally replaced in customer deployments; rather, blocks, pages, or SSDs are marked as failed and the spare capacity available in the drive enclosure is used to provide space for any needed rebuilt data.

VAST offers a 10 year maintenance option where the customer keeps the same storage for 10 full years. That way customers don’t have to migrate data from one system to another until their 10 years are up.

The podcast runs a little under 44 minutes. Howard and I can talk forever. He is always a pleasure to talk with as well as extremely knowledgeable about (VAST) storage and other industry solutions.  The co-hosts and I had a great time talking with him again. Listen to the podcast to learn more.


Howard Marks, Technologist Extraordinary and Plenipotentiary, VAST Data, Inc.

Howard Marks brings over forty years of experience as a technology architect for hire and industry observer to his role as VAST Data’s Technologist Extraordinary and Plenipotentiary. In this role, Howard demystifies VAST’s technologies for customers and customer requirements for VAST’s engineers.

Before joining VAST, Howard ran DeepStorage, an industry test lab and analyst firm. An award-winning speaker, he has appeared at events on three continents including Comdex, Interop and VMworld.

Howard is the author of several books (all gratefully out of print) and hundreds of articles since Bill Machrone taught him journalism at PC Magazine in the 1980s.

Listeners may also remember that Howard was a founding co-Host of the Greybeards-on-Storage Podcast.

0100: GreyBeards talk with Colin Gallagher, VP Dig. Infra. Prod. Mkt. @ Hitachi Vantara

Sponsored By:

We have known Colin Gallagher (@worldc3), VP, Digital Infrastructure Product Marketing at Hitachi Vantara, for a long time and he has always been an all around smart storage guy. Colin’s team at Hitachi Vantara is bringing out a brand new midrange storage system and we thought it would be a good time to catch up with him and learn about it.

The new Hitachi Vantara VSP E990 Storage System is an all NVMe SSD array for medium sized enterprises that need predictable, high IOPS, low latency performance with enterprise class functionality and world class reliability/availability. We asked Colin why they needed all NVMe levels of performance. Colin replied that many of these data centers are starting to use advanced HPC, AI, and data analytics applications together with their standard Oracle, SAP and Microsoft solutions. These combined workloads have an acute need for predictable, high end performance and enterprise class functionality in order to work well.

The VSP E990 comes from a long heritage of enterprise storage at Hitachi, most recently embodied in the Hitachi VSP 5000. In fact, the VSP E990 uses the same storage OS as the VSP 5000, with changes made to streamline it for higher performing, all NVMe storage on a dual controller architecture.

This means all the advanced storage functionality of the high end enterprise VSP 5000 is available on the VSP E990 midrange system, minus some items not pertinent to midrange, such as mainframe attach.

Many of the software changes involved cache and cache management. In the VSP E990, cache is now automatically shared and distributed across controllers, reducing the performance impact of mirroring. Further, Hitachi has added more cores and higher performing processors as well. As a result, the VSP E990 all NVMe array can provide up to 5.8M IOPS with IO response times as low as 64 µsec, the best of any networked storage system. Colin also mentioned that they have reduced flash drive rebuild times by 80%.

The VSP E990 comes in a 4U base configuration and can offer from ~6TB up to over 6PB of virtual capacity with drive expansion. In 8U plus controller (on the audio, it was incorrectly stated as 6U, The Eds.), the VSP E990 provides slots for up to 96 NVMe SSDs. Just like all VSP storage, the VSP E990 also offers the Hitachi 100% Data Availability Guarantee, the world’s oldest. Further, the VSP E990 supports 6-9s (99.9999%) reliability.

In addition the VSP E990 also supports Hitachi Adaptive Data Reduction, which compresses and deduplicates data to increase virtual capacity and reduce physical footprint. In the VSP E990, Adaptive Data Reduction uses AI to determine the best time to deduplicate data while at the same time optimizing host IO performance and effective storage capacity.

Hitachi Ops Center

During the last year or so Hitachi Vantara introduced its new Hitachi Ops Center solution to better administer and manage storage and other digital infrastructure. Ops Center now comes with 4 components: Administrator, Protector (copy data management), Automator and Analyzer.

  • Administrator supplies an element manager for VSP, other storage, and digital infrastructure in the data center.
  • Protector provides enterprise class, copy data management to protect, migrate, and archive VSP data storage.
  • Analyzer supports AI analysis of the data center’s storage operations to monitor SLAs, troubleshoot problems, and improve performance, for Hitachi storage as well as 3rd party compute, network and storage.
  • Automator supplies a series of templates and services to automate mundane, manual storage and other digital infrastructure tasks required to configure, operate and manage these systems in the data center. Automator provides a number of templates which customers can tailor to automate infrastructure operations such as provisioning an ESXi data store. The templates together with Automator services automatically carry out all the OS, fabric and storage/digital infrastructure tasks and activities required to perform these functions.

Hitachi EverFlex consumption models

Hitachi Vantara is also introducing EverFlex, a new series of consumption models, that any customer can use to provide more financial flexibility in their data center digital infrastructure acquisitions, deployments, and management.

EverFlex offers customers the option to purchase, lease or buy on a pay-as-you-go, cloud-like basis any Hitachi Vantara storage or digital infrastructure. Colin mentioned there are two ways that pay-as-you-go can operate:

  1. Customers pay on a pure capacity-over-time basis. Here the customer contracts for a certain capacity, and Hitachi Vantara installs the storage/digital infrastructure capacity and bills them monthly for it.
  2. Customers pay on an SLA-over-time basis. Here they contract for a specific SLA, such as IOPS or another performance characteristic, and Hitachi Vantara installs and maintains whatever storage/digital infrastructure is needed to meet that SLA and bills them monthly for it.

Colin said that all of Hitachi’s world-class services are also now available for purchase under EverFlex.

The podcast ran ~24 minutes. Colin has always been easy to talk with and very knowledgeable about storage. We were very impressed with the performance and innovation in the VSP E990 as well as Ops Center and EverFlex. Keith and I had fun discussing these solutions with Colin. Listen to the podcast to learn more.


Colin Gallagher, VP Digital Infrastructure Product Marketing at Hitachi Vantara

Colin is Vice President for Digital Infrastructure Product Marketing at Hitachi Vantara where he leads product marketing for storage systems, storage software, and converged/hyper-converged solutions.

Over his 25-year career he has led marketing and product management teams at several major storage companies. Colin has a passion for telling compelling stories about technical products that help customers solve both business and personal pain, and he enjoys the challenge of telling them in creative ways.

He holds a bachelor’s degree from Georgetown University and an MBA from Northeastern University. Colin tries to put as many miles on his bike as possible, “hangs out” on twitter as @worldc3, and (unlike the GreyBeards) is team Oxford comma.

099: GreyBeards talk Folding@Home with Mike Harsch, a longtime enthusiast

Microscopic picture of Coronavirus

Mike Harsch (@harschness) is a personal friend and a computer enthusiast with a particular and enduring interest in distributed systems and GPU computing. Mike’s been a longtime user and proponent of Folding@Home, a distributed computing project focused on protein dynamics that anyone can download and run on their personal computer(s) or gaming devices.

We started the discussion with the history of distributed processing using home computers. Mike first ran across these systems in college and was running one in his dorm room back in 1997. At the time there was a project called distributed.net, which was attempting to crack the RC5-56[bit] encryption keys used for computer security and offered a $10K prize for solving it. That key was cracked in 250 days (source: Wikipedia article on distributed.net). Distributed.net is still up and running, but has since moved on to ever larger keys.

Next came SETI@Home, a 2nd generation distributed system. SETI@Home sent out slices of recorded radio telescope spectrum and tasked people’s computers (while their screen savers ran) with analyzing that spectrum for alien signals. SETI@Home painted a nice image of the analysis. It also used some gamification, where users gained points for analyzing spectrum, and over time it had a leader board tracking the top users. Recently, SETI@Home shut down its distributed system and changed its focus to analyzing all the results received from its users. I was a SETI@Home user for a while.

Folding@Home

Folding@Home is a 3rd generation distributed computing solution built along the same lines, but rather than searching for aliens, with Folding@Home you are running a simulation of what a protein molecule does over time. Mike mentioned that a typical Folding@Home work unit simulates a few nanoseconds in the life of a protein, which can take an hour or more on an x86 class multi-core CPU (less on a GPU).

Mike mentioned that there was a recent Ask Me Anything (AMA) event on Reddit with the team on Folding@Home answering questions. And on March 15th, the team at Folding@Home clarified how they are helping to solve the COVID-19 pandemic.

Keith has used Folding@Home in the past. And my son was an early user as well.

What Folding@Home does

Folding@Home uses idle CPU or GPU time on home gaming platforms/computers/servers or data center servers. Initially, in October of 2000, it was used to understand protein folding. Nowadays it has gone beyond just folding, to simulate the life of a protein.

Prior to their turn to concentrate on COVID-19, they usually had ~30K active users, supplying ~100PFlops (10**17 x86 double precision floating point operations per second) of compute power.

You get points for doing Folding@Home work. When Folding@Home launched, it was designed to use a single CPU/single core. Sometime in 2006, they released an SMP version of the code which could use multiple cores. Later they released a multi-threaded version which worked better on multi-core CPUs. And within the last few years, they have released GPU support that can take advantage of the massive numbers of GPU cores available today.

Mike said that a Folding@Home work unit on a GPU generally runs 10 to 100X faster than on a multi-core/multi-threaded CPU system.

Around Feb 27, Folding@Home announced they were going to focus all their efforts on understanding how to combat the COVID-19 coronavirus. After the announcement, their user count went through the roof, to now ~400K active users/day. This led to throttling requests for work and delays in handling responses. Over the ensuing weeks, (as of 3/18), they seem to have added enough resources to support their current levels of users.

The architecture of the old Folding@Home system was two tiered: a set of Folding@Home front-end servers handled web traffic and distributed the work requests/responses to a set of backend servers that supplied work units to users and combined work results. In their latest rush, they seem to have had to add servers, networking and storage to both tiers.

Sometime around March 25th, Folding@Home became the first and only ExaFlop supercomputer, achieving 1.56 (x86) ExaFlops (10**18 FLOPS, source: Wikipedia article on Folding@Home) with over 1 million active computing devices (GPUs & CPUs) in their network (see: Greg Bowman’s status tweet).

Deploying Folding@Home on your systems

Folding@Home runs on any number of endpoint device OSs and gaming console systems. It comes as two software packages: one logs into the Folding@Home servers to fetch the next work unit, and the other does the simulation work. There is an option to paint a picture of what is happening, but most users disable this feature to devote 100% of any idle CPU/GPU resources to the simulation. They also have a support forum if you have any questions or need assistance deploying their software.

Keith mentioned that someone at VMware asked VMware users to devote their home server CPUs/GPUs to the project. I checked their website and they have a vSphere appliance (Fling) that will run Folding@Home and register itself as joining the VMware team. Mike mentioned that GitHub (announced on Twitter) was going to supply up to 60K CPU core hours a day to the project. Folding@Home recently reported that they are shifting work units from understanding COVID-19 to screening compounds for therapeutic potential against the coronavirus.

The world needs you to help solve the COVID-19 pandemic, so join up with Folding@Home to do your part. Downloading the software and installing it on a Mac was easy. Just don’t forget to reboot afterwards and then run FAHControl and FAHViewer in the “Applications/Folding@home” folder to see what’s going on.

The podcast runs a little under 40 minutes. Mike was very knowledgeable about the IT side of Folding@Home, but was less knowledgeable about the biological side of what they are doing.  Listen to the podcast to learn more.


Mike Harsch, a computer enthusiast

Mike is a long time computer enthusiast with particular interests in distributed systems and GPU computing.  He lives in CO and has a basement full of (GPUs &) computers.

Mike and I have co-coached a local high school FTC robotics team for the last 4 years. And Mike has been involved with FTC robotics for much longer than that.