Published 2025-01-06.
Last modified 2026-02-01.
Time to read: 17 minutes.
git collection.
- Git Large File System Overview
- How Many Copies Of Large Files on Git LFS Clients?
- Git LFS Client Installation
- Git LFS Server URLs
- Git-ls-files, Wildmatch Patterns and Permutation Scripts
- Git LFS Tracking, Migration and Un-Migration
- Git LFS Client Configuration & Commands
- Working With Git LFS
Instructions for typing along are given for Ubuntu and WSL/Ubuntu. If you have a Mac, the compiled Go programs provided on GitHub should install easily, and most of the textual information should be helpful.
Overview
The Git Large File System (LFS) is a way of extending Git to allow the versioning of large files, such as audio samples, videos, data, and graphics. Without Git LFS, the largest file that can be committed by commercial Git providers is 100 MB.
Git LFS extends Git repositories. Without a Git repository, Git LFS cannot be used.
Git LFS speeds up common operations like cloning projects with large files and fetching large files. This is because when a Git repository that has been enhanced with LFS is cloned, the only data downloaded are small pointers, not the large files themselves. The large files are fetched on demand and efficiently managed.
I call Git without Git LFS “Plain Old Git”.
Network Bandwidth
Moving large files around in bulk requires a lot of network bandwidth. Unless you have a solid high-speed network setup, Git LFS is not going to provide significant benefit and, in fact, might actually provide lower productivity and be a source of unwanted aggravation.
My apartment utilizes fiber optic internet service, and I have Ethernet and Wi-Fi 6 coverage for mobile devices. Git LFS has worked well for me with this setup.
History
In 2015, 11 years ago, Atlassian started co-developing Git LFS for their BitBucket product along with GitHub and GitLab.
Be sure to check the publication date of information you find on the interwebs regarding Git LFS. There was a huge flood of information when it was first announced. Version 1.0.0 was released on 2015-10-01.
Git LFS started to gain traction in 2017. In the early days, the requirement for specialized Git LFS servers was a problem because those servers were scarce, and those that were available had serious issues and limitations.
As recently as 2021, Git LFS was not ready for prime time.
In 2025, we have many servers to choose from, and they are surprisingly lightweight processes. The technology has matured considerably in recent years; however, after examining the implementations closely, I found that the only one that follows proper software standards (such as proper error handling) is the GitLab implementation.
Do not believe any advice unless it was recently written. 'Facts' that were cited in the early days have probably changed.
My purpose in writing these articles is to provide current information and advice, backed up with solid references and working code.
Licensing
Git and Git LFS are two separate projects. The Git LFS open-source project is hosted at GitHub and has the MIT license. In contrast, Plain Old Git has the more restrictive GNU GPL v2.0 license.
The more permissive licensing for Git LFS means that a second independent project to provide a programmable interface to Git LFS is unnecessary from a legal standpoint. There is no need for the equivalent of Plain Old Git’s libgit2.
Phew! ... now we just need an API facade and some language bindings.
Distributed System Components
Like Plain Old Git, Git LFS requires a client and a server. When setting up a Git server with Git LFS, you actually configure two servers (a Plain Old Git server and an LFS server). Every Git user also needs their Plain Old Git client to be enhanced with a Git LFS extension.
– Run Git LFS server on your laptop –
– Store large file versions wherever –
From a Git LFS user’s point of view, all of the files stored within a Git repository are versioned, regardless of whether the files are big or small. However, the Plain Old Git server manages small files, while the Git LFS server manages large files. Within the Plain Old Git database, only the pointers to large files on the Git LFS server are versioned, not the contents of the files. It is the responsibility of the Git LFS server to maintain version history for large files.
Git LFS works by using a "smudge" filter to look up the large file contents based on the pointer file and a "clean" filter to create a new version of the pointer file when the large file’s contents change. It also uses a pre-push hook to upload the large file contents to the Git LFS server whenever a commit containing a large new file version is about to be pushed to the corresponding Git server.
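For concreteness, a Git LFS pointer file is just a tiny text file; the following is a representative example (the oid and size are placeholder values, not from a real repository), followed by the filter configuration that git lfs install writes into your Git configuration:
version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345
[filter "lfs"]
    clean = git-lfs clean -- %f
    smudge = git-lfs smudge -- %f
    process = git-lfs filter-process
    required = true
The clean filter turns the large file into the pointer shown above when you stage it; the smudge filter does the reverse during checkout.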
See the git-lfs man page for further detail.
Gateway, Not a Proxy
Standard Git LFS servers (e.g., GitLab, Artifactory, Bitbucket, or custom implementations like the git-lfs-s3 Ruby gem) act as a gateway that does not process or stream any file data. The Git LFS standard specifies how to use presigned URLs to let clients upload and download directly to and from storage backends like S3. Presigned URLs are temporary, signed S3-compatible links for secure and private transmission.
This architecture separates metadata handling from data transfer, which greatly reduces server load; almost anything is able to act as a server. This flexibility and simplicity reduce cost and latency. Metadata is used by the batch API for pointers and authentication.
Request Flow
In a typical S3-backed Git LFS setup:
- Git LFS client requests upload/download via LFS batch API to the platform’s LFS server.
- Git LFS server authenticates and generates presigned S3 URL.
- Git LFS client transfers data directly to/from S3 using the URL, without any data flowing through the LFS server.
- Git server updates metadata (pointers in Git repo).
This is efficient and scalable, unlike a true proxy (e.g., the Cloudflare Worker-based git-lfs-s3-proxy, which does relay data).
Even if you use SSH for standard Git operations, Git LFS almost always issues its own separate requests over HTTPS. This means your network must allow HTTPS traffic to the LFS server even if Git is working over SSH.
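To make that HTTPS handshake concrete, here is a hedged sketch of the batch request a client sends; the host, repository, credentials, oid, and size are placeholders, and real servers may require different authentication:
$ curl -s -X POST \
    -u "username:token" \
    -H "Accept: application/vnd.git-lfs+json" \
    -H "Content-Type: application/vnd.git-lfs+json" \
    -d '{"operation": "download", "transfers": ["basic"], "objects": [{"oid": "<sha256 of the large file>", "size": 12345}]}' \
    https://git.example.com/owner/repo.git/info/lfs/objects/batch
The JSON response contains the presigned URLs described in the flow below.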
Git LFS can be computationally lightweight
My 6-year-old laptop ran a custom Git LFS server on WSL / Ubuntu on Windows 10 without any problem. I use Git LFS on my Songs projects, which store recorded music video projects.
Large assets can be in data storage that is physically remote from the Git LFS Server.
Grok Told Me This
I had a good session with Grok.
Here is the typical handshake / protocol flow for Git LFS when using a standard implementation (e.g., GitHub, GitLab, Bitbucket, Gitea, or a custom LFS server like git-lfs-s3) backed by an S3-compatible storage (AWS S3, MinIO, R2, etc.).
This flow uses the modern Batch API (introduced in Git LFS v1.1+ ~2016 and now universal). The LFS server never streams the large binary data itself — it only acts as an authorization & metadata control plane, issuing presigned URLs for direct client ↔ S3 communication.
Download Flow (git pull / fetch / clone — getting objects to your laptop)
1. Git client detects LFS pointers. During git fetch, pull, or clone, Git sees pointer files in the repository (small text files with version, oid = SHA256 hash, and size).
2. Git invokes the git-lfs client (smudge filter process or an explicit git lfs fetch) → git-lfs collects all missing object IDs (SHA256 oids) that need to be downloaded.
3. git-lfs client → LFS server: POST /objects/batch (the key “handshake” request)
   - HTTP POST to the LFS endpoint (usually /info/lfs/objects/batch or similar)
   - Body: JSON with "operation": "download", a list of object specs ({oid, size}), and transfer type hints
   - Headers: Accept: application/vnd.git-lfs+json
   - Authentication: usually Bearer token / Basic auth / SSH key forwarded from Git credentials (same as git push/pull)
4. LFS server authenticates the request
   - Verifies user/repo access (via Git server integration)
   - Checks which objects already exist in storage (by oid)
5. LFS server → git-lfs client: 200 OK batch response
   - JSON response with "transfer": "basic" (or other)
   - For each requested object:
     {
       "oid": "...",
       "size": 123456789,
       "actions": {
         "download": {
           "href": "https://s3.amazonaws.com/my-bucket/.../sha256:...?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=...&X-Amz-Expires=900&X-Amz-Signature=...",
           "header": { "Accept": "application/octet-stream" } // optional
         }
       }
     }
     → This is the presigned GET URL (usually expires in 5–60 minutes)
6. git-lfs client → S3-compatible backend: HTTP GET presigned URL (direct!)
   - Client downloads the raw binary data directly from S3
   - No data flows through the LFS server
   - Uses plain HTTP GET (with query params for signature validation)
   - Progress reporting shown in the terminal
7. git-lfs client stores the object locally
   - Saves to .git/lfs/objects/xx/xx/... (content-addressable)
   - Verifies the SHA256 matches the oid
   - The Git smudge filter replaces the pointer with the real file content during checkout
Upload Flow (git push — sending new large files from your laptop)
The sequence is very similar but reversed:
1. git-lfs client detects new/changed LFS objects during git push (after Git has prepared the commit locally).
2. git-lfs client → LFS server: POST /objects/batch
   - Body: JSON with "operation": "upload" and the list of new objects
3. LFS server authenticates + checks
   - May check if the object already exists (skip upload if yes — deduplication by oid)
4. LFS server → git-lfs client: 200 OK batch response
   - For each object:
     {
       "actions": {
         "upload": {
           "href": "https://s3.amazonaws.com/my-bucket/...?...&X-Amz-Algorithm=...&X-Amz-Signature=...",
           "header": {
             "Content-Type": "application/octet-stream",
             "x-amz-meta-foo": "bar" // optional custom headers
           }
         }
       }
     }
     → Presigned PUT URL
5. git-lfs client → S3: HTTP PUT presigned URL (direct upload)
   - Streams the binary file directly from the laptop to S3
   - Sends any required headers (e.g., Content-Type)
   - S3 validates the signature and stores the object by oid path
6. git-lfs client → LFS server: POST /objects (optional verify step in some implementations)
   - Tells the server “upload completed successfully” (used by some servers for tracking/logging)
7. git push completes
   - Only small pointer files are pushed through the Git protocol
   - The large data transfer happened directly laptop ↔ S3
Summary Table – Who Talks to Whom
| Step | Participant | Talks to | Transfers Data? | Protocol/Method |
|---|---|---|---|---|
| 1–2 | Git client + git-lfs | — (local detection) | No | Local |
| 3–5 (handshake) | git-lfs client | LFS server (/batch) | No (metadata) | HTTP POST JSON |
| 6 (data movement) | git-lfs client | S3-compatible storage | Yes (big!) | HTTP GET/PUT + presigned |
| 7 | git-lfs client | LFS server (optional verify) | No | HTTP POST |
Key architectural beauty: The LFS server handles authentication, authorization, deduplication, and URL signing — but zero bytes of actual large file data ever touch the LFS server. This is why it’s extremely scalable and cheap when backed by object storage like S3.
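As a rough illustration of how lightweight the URL-signing step is, the AWS CLI can mint an equivalent presigned GET URL with a single command; the bucket and key below are placeholders, and a real LFS server would do the same thing through an S3 SDK rather than the CLI:
$ aws s3 presign s3://my-lfs-bucket/lfs/objects/<oid> --expires-in 900
The resulting URL can be fetched with plain curl until it expires, which is essentially what the git-lfs client does with the href values in the batch response.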
Motivational Differences
There are motivational differences between how one might use Plain Old Git versus Git LFS.
Differing Origin Stories
Git was created by Linus Torvalds, a power user who performed a lot of demanding code-related administrative tasks. In contrast, the Git LFS project was cooperatively initiated by large-scale commercial vendors.
- Git LFS is the vendors’ project
- To solve their problems
- And fulfill demands made by their customers.
Git LFS users outside that use case, like me, have to figure many things out for themselves. The only way to do that properly is to experiment extensively, carefully read a lot of documentation, and examine source code.
Even though a lot of documentation has been written about Git, the Git LFS documentation was written by and for commercial vendors. I have tried to fill in gaps with this collection of articles.
Do Not Rush Into Git LFS
Getting Git LFS to work can be fraught with issues. Time and patience are required to achieve an acceptable working system for particular circumstances. I hope that readers will benefit from the time I spent writing these articles.
It is easier and less risky to create a new Git repository that uses Git LFS right away than it is to migrate an existing Git repository to use Git LFS and maintain the structure of the commits.
By default, Git LFS rewrites the Git repository history during the migration process; this preserves the structure while reducing the size of the Git repository. The repository is smaller after the migration rewrites the Git history because large files are moved from the Git repository to the associated Git LFS repository.
If a project requires a carefully groomed commit graph, the Git history must be rewritten. Rewriting the Git history requires everyone to re-clone the Git repository. If many people share a Git repository and the history is rewritten, chaos can result because some people will invariably continue to work on the now-obsolete old Git repository. The problem users are the ones who do not read memos. They will lose their work unless a recovery procedure is followed.
If maintaining the Git commit ordering is not important to you and your organization, then you will be happy to know that this incantation:
$ git lfs migrate import --no-rewrite
... preserves compatibility for all your other users. However, the price of this compatibility is that after the import, any copies of large files that were in the Git repository will remain there. If those files were very large, the repository would remain bloated for all time.
So you must choose one of the following when enhancing a Git repository with Git LFS; representative incantations for both choices are sketched after the list (see Git LFS Tracking, Migration and Un-Migration):
- Benefit: smaller Git repository and consistent commit ordering.
  Risk: a potential for unhappy users who lose work while the Git upgrade is in process.
- Benefit: unlikely that anyone will lose work during the upgrade.
  Potential issue: the Git repository forever remains bloated with large files that no longer have a purpose.
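For orientation, these are representative incantations for the two choices; the file patterns and path are placeholders, and the exact flags you need are covered in Git LFS Tracking, Migration and Un-Migration:
# Choice 1: rewrite history so existing large files move to LFS (everyone must re-clone)
$ git lfs migrate import --everything --include="*.wav,*.flac,*.mp4"
# Choice 2: convert files to LFS going forward, without rewriting history
$ git lfs migrate import --no-rewrite path/to/large-file.wav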
Wait until you are familiar with Git LFS before attempting to convert any repositories that you care about to Git LFS. Create repositories to practice on, as shown in these articles.
It is likely that you will encounter problems trying to get things to work. Hopefully, the solutions that I provide will help you solve them. Learn about the problems you will likely encounter and practice the solutions on the practice repositories before you try enhancing an important Git repository with LFS.
Implementations
Git is a distributed versioning system, and so is Git LFS. Pricing for prepackaged LFS storage is unreasonably high from BitBucket, GitHub, and GitLab. Running a local Git LFS server means you can store large assets wherever makes the most sense, including local storage or on an S3 server.
With few exceptions, Git LFS servers from BitBucket, GitHub, and GitLab lack the ability to authenticate against other storage services. If you want to pay the lowest price possible for storage but want to host on one of the aforementioned Git providers, you will need to run your own instance of a Git LFS server. Happily, Git LFS servers are lightweight processes that do not generate much network traffic, so you could run one on your laptop or a small office server.
For example, you can point a local Git LFS server at large files on AWS S3 or S3 clones like DigitalOcean Spaces or Backblaze B2. You won't incur any extra costs from Git providers like GitHub or BitBucket for this, and this configuration is easy to set up.
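One common way to wire this up is a .lfsconfig file committed at the root of the repository, which points every clone's Git LFS client at your self-hosted server; the URL below is a placeholder:
[lfs]
    url = https://lfs.example.com/my-org/my-repo
The Git repository itself can stay on GitHub or Bitbucket while the large objects go wherever the self-hosted server points.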
Many LFS implementations exist; however, as is often the case with software, dead projects live on as zombies for many years. The following LFS implementations were the most interesting to me.
APIs
The Git LFS Batch API is documented here. It is implemented by most vendors and Git LFS storage solutions as the default HTTP-based Git LFS object transfer protocol.
In contrast, the SSH-based Git LFS object transfer protocol, labeled the "SSH protocol proposal" in the linked page, was introduced with Git LFS v3.0.0. This protocol is only offered by a few vendors today, such as GitLab, who added it in their v17.2 release. I believe that GitHub and BitBucket also offer Git LFS over SSH.
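Whichever transfer your vendor supports, you can confirm which endpoint a repository will actually use from the client side; git lfs env prints the resolved configuration, and the Endpoint lines show the URL and authentication scheme (output varies by version and remote, so none is shown here):
$ git lfs env | grep Endpoint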
LFS Server Summary
Prices are shown in the next section.
I carefully examined the source code of every open source Git LFS server mentioned in this article. The only one that had proper error handling and other standard quality measures was the GitLab implementation. All the others only had “happy path” coding. The commercial products (GitHub, JFrog and Sonatype) are not open source and so I was unable to examine the source code.
- Null servers only use locally accessible content, such as might be found on a server in your local area network. This is the simplest setup, and if that server is regularly maintained (including backups), this option can provide good performance at no extra cost and subject to whatever security regime you decide to implement.
- BitBucket has a tiny free storage offering, so small as to be almost useless (1 GB per Bitbucket workspace).
- Git LFS S3 Proxy is a Cloudflare Pages site that acts as a Git LFS server backed by any S3-compatible service. This project is different from all others mentioned. It is extremely simple and maintainable, uses Cloudflare's inexpensive global edge network, and is dramatically cheaper than GitHub and GitLab LFS storage. From a security point of view, this architecture is the most insecure of all. Every request exposes full credentials. There is a very high security risk if the URL is logged, shared, copied to a clipboard, appears in git reflog, CI logs, browser history, etc. This project is a brilliant hack, but exposes the user to unnecessary risk.
- GitHub also has a tiny free storage offering, so small as to be almost useless (1 GB per GitHub user).
- GitLab: All projects on GitLab.com have 10 GiB of free storage for their Git repository and Large File Storage (LFS). While much more generous than GitHub's free offering, this is still too small to be useful for many projects. GitLab’s storage pricing is crazy expensive. This appears to be the best available open-source Git LFS implementation.
- JFrog Artifactory is a commercial server that provides Git LFS support, and many other features. Unless you are already a JFrog customer, it would not make sense to select this product for Git LFS.
- Gitbucket is a F/OSS project. However, I would be reluctant to rely on anything written in Scala for production.
- Giftless is a pluggable F/OSS Git LFS server. It is written in Python and claims to be easy to extend. In particular, it supports local storage and storage on AWS S3. Unfortunately, it is an example of happy-path coding and is incomplete.
- LFS Test Server is an example server that implements the Git LFS API. The product notes say that it is “intended to be used for testing the Git LFS client and is not in a production-ready state.” However, if run on an internal network, or on a laptop, this might be a viable option. LFS Test Server is written in Go, with pre-compiled binaries available for Mac, Windows, Linux, and FreeBSD. This is a reasonable starting point for a custom Git LFS server, but it lacks proper error handling.
- Rudolfs claims to be a high-performance, caching, F/OSS Git LFS server with AWS S3 (which does not work) and local storage back-ends. It does not have proper error checking and only considers the happy path. I do not recommend using this project.
- Sonatype Nexus Repository is a commercial server that provides Git LFS support, and many other features. It only supports the Git LFS batch API. It does not offer an online storage option; users should self-host. Unless you are already a Sonatype customer, it would not make sense to select this product for Git LFS.
Storage Pricing
There is a huge disparity in pricing between the various S3-compatible storage providers. Wasabi, at $7/TB/month, is the cheapest, while GitLab, at $6000/TB/month, is the most expensive. I do not believe that GitLab’s storage is 857 times better than Wasabi’s.
Data egress fees can become substantial costs. In fact, for large and active assets, data egress fees can be much greater than storage costs. GitHub has the highest data egress fees, 33 times more than the next most expensive provider.
When data egress fees are charged, there is no limit to the potential financial liability.
Data egress fees do not apply from a Git provider (like GitHub) when storage is provided by a separate storage provider (like Wasabi).
In the following table, data egress fees for providers that do not charge for data egress are shown as $0.
| Storage | Price | Normalized Storage ($/TB/month) | Egress ($/GB) | Comment |
|---|---|---|---|---|
| AWS | $0.023/GB/month | $26 | $0.090 | |
| Azure | $0.15/GB/month | $15 | $0.087 | Premium pay-as-you-go pricing shown; other charges apply. Very complex pricing. |
| Backblaze | $6/TB/month | $6 | $0.010 | Minimum charge: $6/month. |
| Bitbucket | $0.100/GB/month | $100 | ? | Incomplete information shown online. |
| Cloudflare R2 | $0.015/GB/month | $15 | $0 | Very complex pricing. 10 GB free storage, unlimited bandwidth to write up to 1 million objects and read up to 10 million objects. |
| DigitalOcean Spaces | $0.02/GB/month | $20 | $0.010 | Minimum charge: $5/month |
| GitHub | $5/data pack/month | $100 | $100.000 | |
| GitLab | $5/month: 10 GB storage and 20 GB egress | $6000 | $3.000 | Other charges apply, for example, $29 per user/month |
| Google Cloud | | $23 | $0.110 | Very complex pricing. Other charges apply. |
| Linode | $5/month: 250 GB storage, 1 TB egress | $20 | $0.005 | The first 1 TB egress is included |
| Scaleway | $0.015/GB/month | $15 | $0.010 | 75 GB/month egress included |
| Wasabi | | $7 | $0 | Minimum charge: $6.99/month. |
Use the Latest Version of Git
At first, I had a really difficult time making Git LFS work. A major contributing factor was that my versions of Git on the client and server were both old, and the server version was older than the versions on the clients. Once I sorted that out, things got much easier.
As of 2025-12-08, the latest stable version of Git was v2.52.0.
You can check the version of Git on your clients and the Git LFS server by running the following on each computer:
$ git -v
git version 2.47.1
Git Releases
I asked ChatGPT to summarize the enhancements to Git made since v2.43 was released (November 20, 2023) that might affect Git LFS. The response has been edited for readability and relevance.
Good question. There have indeed been multiple changes in Git since version 2.43.0 that can affect Git LFS usage — either directly or indirectly. Below is a summary of the most relevant changes (core-Git plus Git LFS itself) since 2.43.0.
✅ Git-Side Changes (since 2.43.0) that can impact Git LFS workflows
- New “reftable” reference backend (Git 2.45.0)
  - Git 2.45 introduced experimental support for a new reference-storage backend called “reftable” as an alternative to the traditional “files” (loose refs + packed-refs) backend. ([The GitHub Blog][1])
  - That means repositories (including those using LFS) can now use git init --ref-format=reftable or git clone --ref-format=reftable to get potentially more scalable reference storage — especially useful for repos with many branches/tags. ([about.gitlab.com][2])
  - For existing repos, this may affect how references (branches/tags) are stored — which indirectly affects any LFS-backed repository.
- Performance improvements for pack generation (Git 2.44.0)
  - Git 2.44 added “faster pack generation with multi-pack reuse,” which improves performance of operations like push/pull. ([Mike Slinn][3])
  - This can speed up repository operations even when there are many large files tracked by LFS (since pack generation affects object transfer and repository maintenance).
- Reversion of a change to attr.tree default behaviour (Git 2.46.0)
  - In 2.43.0, Git changed how attribute lookup worked for bare repositories (using the HEAD tree by default for .gitattributes in such cases). That had implications for LFS, since .gitattributes often defines which files are tracked by LFS. ([about.gitlab.com][4])
  - In 2.46.0 the change was partially reverted because it caused performance regressions for clone/pull/diff with large or deep trees. ([about.gitlab.com][4])
  - In practice, this reversion helps avoid unexpected attribute-resolution issues in bare repositories, which might otherwise have disrupted LFS tracking.
- Build-system, memory-leak, and internal quality improvements (Git 2.47 / 2.48)
  - Starting around 2.47 / 2.48, Git migrated to a new build system (the Meson build system), improving maintainability across platforms. ([about.gitlab.com][5])
  - In 2.48, Git was declared “memory-leak-free” (as exercised by the test suite). ([about.gitlab.com][5])
  - While not LFS-specific, these improvements reduce the risk of subtle memory or behavior bugs when working with large repos — which is often the case when LFS is in use.
- Safety/hardening changes that affected LFS workflows in Git for Windows 2.45.1
  - According to the release notes for Git for Windows 2.45.1, a “defense-in-depth” change caused a regression when cloning repos enabled with Git LFS: clones would fail because an active post-checkout hook (installed by git lfs install) would be disallowed by default. ([Google Groups][6])
  - The workaround was to run git lfs pull after clone, or adjust the environment to disable the protection (GIT_CLONE_PROTECTION_ACTIVE=false). ([GitHub][7])
  - This is a concrete example of a Git core change interfering with LFS usage — something to watch out for if you rely on hooks or complex clone workflows.
🛠 Git LFS–Side Updates (since Git 2.43.0) relevant to newer Git versions
Although the core of LFS is separate from Git, the combination of updated Git + updated LFS can produce better behaviour. Some relevant LFS-project changes:
- LFS v3.6.0 added support for “multi-stage authentication with Git credential helpers” (when used with Git ≥ 2.46.0) and support for relative worktree paths (when Git ≥ 2.48.0). It also added a new “object transfer batch size” config option, improved Windows path handling at install time, more POSIX-compliant hook scripts, and performance improvements in contexts like sparse checkouts, partial clones, and Git remotes with many tags. ([Mike Slinn][3])
- LFS v3.7.0 (the latest major LFS release as of 2025) introduced several enhancements: options to force objects to be re-downloaded, output of object URLs + HTTP metadata as JSON (useful for automation), a configurable in-memory cache of file-path pattern matches (speeding up LFS migration of large repos), better compatibility on Windows (same .netrc handling as Git and curl), tolerant retry logic after HTTP 429 responses, support for symlinks to Git’s object storage during LFS migrations, and fewer TLS-verification issues on macOS with custom CA certificates. ([GitHub][8])
⚠️ Known Issues / Gotchas because of Git ↔ LFS Interaction
- The safety hardening in Git 2.45.1 for Windows caused some Git LFS workflows to fail (as noted above) — clones would succeed but checkout would fail if a post-checkout hook was present. The official remedy was to run git lfs pull. ([Google Groups][6])
- Switching the reference backend to “reftable” might have unknown side effects for tooling or hosting services that expect the “files” backend — since this is still experimental/optional (though increasingly recommended) with Git 2.45+.
🎯 What This Means for You (as a Git + LFS User)
- If you maintain a repo with many branches/tags (or expect to scale), consider experimenting with the “reftable” backend — especially if you also use LFS. It can potentially improve performance and ref-storage scalability.
- Upgrading to recent Git (≥ 2.48) and recent LFS (≥ 3.6) will give you better performance, more robust authentication, and improved cross-platform behavior (notably Windows/macOS).
- Be aware of the Git for Windows 2.45.1 issue: if your workflows rely on hooks (e.g., post-checkout from git lfs install), the default safety restrictions can block checkout on fresh clones — and you may need to call git lfs pull manually (or adjust the environment).
- For large / complex repos (many refs, many LFS objects, sparse/partial clones), the combination of Git performance improvements + LFS enhancements should make Git + LFS much more efficient/stable than before 2.43.0.
Upgrading Git
StackOverflow provided the information on how to upgrade Git on Ubuntu. As usual, these instructions also apply to WSL/Ubuntu. If you are typing along, do the following on your server and all your clients.
Add the git-core PPA to the apt sources.
$ yes | sudo add-apt-repository ppa:git-core/ppa
PPA publishes dbgsym, you may need to include 'main/debug' component
Repository: 'Types: deb
URIs: https://ppa.launchpadcontent.net/git-core/ppa/ubuntu/
Suites: noble
Components: main
'
Description:
The most current stable version of Git for Ubuntu.
For release candidates, go to https://launchpad.net/~git-core/+archive/candidate .
More info: https://launchpad.net/~git-core/+archive/ubuntu/ppa
Adding repository.
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Hit:2 https://dl.google.com/linux/chrome/deb stable InRelease
Get:4 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu noble InRelease [24.3 kB]
Get:5 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu noble/main amd64 Packages [2,840 B]
Hit:3 https://packagecloud.io/github/git-lfs/ubuntu noble InRelease
Get:6 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu noble/main i386 Packages [2,848 B]
Get:7 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu noble/main Translation-en [2,088 B]
Fetched 32.1 kB in 1s (35.7 kB/s)
Reading package lists... Done
N: Skipping acquire of configured file 'main/binary-i386/Packages' as repository 'https://dl.google.com/linux/chrome/deb stable InRelease' doesn't support architecture 'i386'
Now update the apt packages and upgrade Git.
$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Hit:2 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu noble InRelease
Hit:3 https://packagecloud.io/github/git-lfs/ubuntu noble InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
$ sudo apt list --upgradable
Listing... Done
git-man/noble,noble 1:2.47.1-0ppa1~ubuntu24.04.1 all [upgradable from: 1:2.43.0-1ubuntu7.1]
git/noble 1:2.47.1-0ppa1~ubuntu24.04.1 amd64 [upgradable from: 1:2.43.0-1ubuntu7.1]
$ yes | sudo apt upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
Get more security updates through Ubuntu Pro with 'esm-apps' enabled:
  libcjson1 libavdevice60 ffmpeg libpostproc57 libavcodec60 libavutil58
  libswscale7 libswresample4 gh libavformat60 libavfilter9
Learn more about Ubuntu Pro at https://ubuntu.com/pro
The following packages will be upgraded:
  git git-man
2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 8,967 kB of archives.
After this operation, 11.9 MB of additional disk space will be used.
Get:1 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu noble/main amd64 git amd64 1:2.47.1-0ppa1~ubuntu24.04.1 [6,775 kB]
Get:2 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu noble/main amd64 git-man all 1:2.47.1-0ppa1~ubuntu24.04.1 [2,192 kB]
Fetched 8,967 kB in 10s (881 kB/s)
(Reading database ... 486991 files and directories currently installed.)
Preparing to unpack .../git_1%3a2.47.1-0ppa1~ubuntu24.04.1_amd64.deb ...
Unpacking git (1:2.47.1-0ppa1~ubuntu24.04.1) over (1:2.43.0-1ubuntu7.1) ...
Preparing to unpack .../git-man_1%3a2.47.1-0ppa1~ubuntu24.04.1_all.deb ...
Unpacking git-man (1:2.47.1-0ppa1~ubuntu24.04.1) over (1:2.43.0-1ubuntu7.1) ...
Setting up git-man (1:2.47.1-0ppa1~ubuntu24.04.1) ...
Setting up git (1:2.47.1-0ppa1~ubuntu24.04.1) ...
Processing triggers for man-db (2.12.0-4build2) ...
Checking the installed version of Git shows the desired result:
$ git --version
git version 2.47.1
References
- git-lfs.com
- Git LFS by Atlassian
- GitLab - Git LFS
- Handling Large Files with LFS
- Man pages.
- The online documentation. Read about the documented limitations.
- A Developer’s Guide to Git LFS
- Set Up a Git LFS Repository by JFrog