Step-by-Step: Using .netshrink to Reduce Assembly Size and Improve Startup Time

Reducing assembly size and improving startup time are common goals for .NET developers shipping desktop, mobile, or server applications. Smaller binaries can reduce download sizes, save storage, and, critically, improve application cold-start times. .netshrink is a tool that helps achieve these goals by shrinking and optimizing .NET assemblies while attempting to preserve runtime correctness. This guide walks through planning, installation, configuration, running, verification, and advanced techniques for using .netshrink effectively.
What .netshrink does (brief overview)
.netshrink is a managed-code shrinking and optimization tool for .NET assemblies. At a high level it:
- Removes unused code and metadata (tree shaking).
- Performs IL-level optimizations and reductions.
- Optionally rewrites or merges assemblies to reduce duplication.
- Offers configuration to preserve reflection, serializers, or other dynamic features.
Note: Shrinking can change binary layout and remove seemingly-unused members that are actually invoked via reflection, configuration, or native interop. Always validate thoroughly after shrinking.
When to consider .netshrink
Use .netshrink when:
- Your deployment size matters (mobile apps, downloads, constrained environments).
- You need faster cold-start or improved JIT throughput by reducing code working set.
- You want to reduce memory footprint or duplicate code across assemblies.

Avoid aggressive shrinking on codebases with heavy runtime reflection, dynamic code generation, or extensive IL emit usage unless you can provide accurate preservation rules.
Prerequisites
- A .NET development environment compatible with your target assemblies (e.g., .NET 6/7/8 — check .netshrink documentation for supported runtimes).
- Backups and source control for your project and build artifacts.
- A test suite (unit + integration tests) and manual smoke test plan for scenarios that depend on reflection, configuration-driven behavior, serializers, or plugins.
- Optional: a performance measurement setup to quantify startup improvements (trace tools, Stopwatch, profilers).
Installing .netshrink
- Check the official distribution (NuGet, CLI installer, or GitHub releases) and choose the version matching your target runtime.
- Common installation methods:
- As a global CLI tool (if available):
dotnet tool install --global netshrink
- As a local dotnet tool in your repo:
dotnet new tool-manifest   # if you don't have one
dotnet tool install netshrink
- As a NuGet package or build task (for CI integration): add the package to your build pipeline or incorporate into MSBuild targets.
Basic usage: a step-by-step run
1. Build your project in Release mode:

   dotnet build -c Release

2. Identify the target assembly (or assemblies) you want to shrink (e.g., bin/Release/net8.0/MyApp.dll).

3. Run .netshrink with a minimal configuration:

   netshrink shrink --input bin/Release/net8.0/MyApp.dll --output bin/Release/net8.0/shrunk/MyApp.dll

   - --input: path to the assembly (or a folder)
   - --output: destination for the shrunk assembly
   - The CLI may accept patterns or multiple inputs.

4. Replace or package the shrunk assembly for testing:

   - For quick testing, copy the shrunk output into your runtime folder or create a test package.
   - For production packaging, integrate the shrink step into your publish pipeline.
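After a run, it is worth checking how much was actually saved before investing in deeper configuration. A minimal sketch of a size comparison between the original and shrunk assembly (the file paths are hypothetical; adjust them to your build layout):

```python
import os

def size_reduction(original_path: str, shrunk_path: str) -> float:
    """Return the size reduction of the shrunk file as a percentage of the original."""
    original = os.path.getsize(original_path)
    shrunk = os.path.getsize(shrunk_path)
    return (1 - shrunk / original) * 100

# Hypothetical paths; adjust to your build output layout.
# print(f"Reduced by {size_reduction('bin/Release/net8.0/MyApp.dll',
#                                    'bin/Release/net8.0/shrunk/MyApp.dll'):.1f}%")
```

A single-digit reduction on a reflection-heavy assembly may be expected; a near-zero result can also indicate the tool skipped the assembly entirely, which is worth checking in its log output.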
Creating a configuration file
A configuration file lets you control what .netshrink preserves and how aggressive it should be. Typical configuration options:
- Keep rules for types/members used via reflection.
- Preserve XML/JSON serialization contracts.
- Exclude assemblies from shrinking (third-party or native-dependent).
- Set optimization levels (e.g., conservative, balanced, aggressive).
Example minimal JSON (syntax may vary by version):
{
  "keep": [
    "MyApp.Program",
    "MyApp.Services.*",
    "System.Xml.Serialization.*"
  ],
  "excludeAssemblies": [
    "ThirdParty.NativeBindings"
  ],
  "optimizationLevel": "balanced"
}
Place this file next to your assembly or reference it in the CLI:
netshrink shrink --config netshrink.json --input ...
Preserving reflection and dynamic usage
The most common shrinking pitfall is removing members only referenced dynamically. Strategies to avoid breakage:
- Add explicit keep rules for types/members accessed by reflection.
- Use “linker-friendly” attributes if supported (e.g., [Preserve] attributes) inside your codebase.
- Enable “reflection analysis” in .netshrink if available — it can infer some dynamic usage from code patterns, but it’s not foolproof.
- For serializers (JSON, XML, protobuf), preserve the model types, or configure the serializer to use interface/contract-based access that is less dependent on concrete members.
Example keep rule patterns:
- Exact type: MyApp.Models.User
- Namespace wildcard: MyApp.Models.*
- Member wildcard: MyApp.Models.User::Get*
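The patterns above can be combined in one config file. A sketch (the rule names and pattern syntax may vary by .netshrink version, so treat this as an assumption to check against the documentation):

```json
{
  "keep": [
    "MyApp.Models.User",
    "MyApp.Models.*",
    "MyApp.Models.User::Get*"
  ],
  "optimizationLevel": "conservative"
}
```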
Integrating into CI/CD
Add a shrink step to your CI pipeline after building and before packaging:
Example (pseudo YAML):

```yaml
- name: Build
  run: dotnet build -c Release
- name: Shrink assemblies
  run: netshrink shrink --config netshrink.json --input bin/Release/net8.0 --output bin/Release/net8.0/shrunk
- name: Run tests
  run: dotnet test --no-build
```

Run tests against the shrunk output (either by swapping assemblies in the test run or by having test jobs that reference the shrunk folder).
Verifying correctness
- Automated tests: run unit and integration tests against the shrunk artifacts.
- Manual smoke tests: exercise common UI flows, plugin loading, serialization, and startup.
- Runtime logging: enable detailed log levels to catch missing type/member exceptions (TypeLoadException, MissingMethodException, MissingMemberException).
- Compare behavior and outputs between original and shrunk builds in representative scenarios.
Measuring startup improvement
To quantify improvements:
- Host environment: measure on target hardware (mobile device, VM, or client machine).
- Metrics: cold-start time (time to first UI/display or to run main method), time-to-first-request for services, and memory usage during startup.
- Tools:
- Stopwatch-based instrumentation in Program.Main.
- dotnet-trace, PerfView, or platform-specific profilers.
- Compare multiple runs and average the results; shrinking often reduces JIT work and disk I/O, which lowers cold-start times.
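The multi-run comparison can be automated with a small harness that launches the app in a fresh process each time and averages the wall-clock time. A sketch (the `dotnet` invocations in the comment are hypothetical paths; any command line works):

```python
import statistics
import subprocess
import time

def measure_startup(cmd: list[str], runs: int = 5) -> float:
    """Launch cmd in a fresh process `runs` times and return the mean wall time in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.mean(samples)

# Hypothetical usage: compare original vs. shrunk builds.
# baseline = measure_startup(["dotnet", "bin/Release/net8.0/MyApp.dll"])
# shrunk = measure_startup(["dotnet", "bin/Release/net8.0/shrunk/MyApp.dll"])
```

Note this measures full process launch, so it includes runtime startup overhead common to both builds; the delta between the two numbers is the interesting figure, and each run after the first is no longer a true cold start unless you also clear OS file caches.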
Example simple measurement in Program.Main:

using System.Diagnostics;

var sw = Stopwatch.StartNew();
// ... app initialization ...
sw.Stop();
Console.WriteLine($"Startup time (ms): {sw.ElapsedMilliseconds}");
Troubleshooting common issues
- MissingMethodException / TypeLoadException: add keep rules for the missing members or types.
- Serializer failures: preserve model types or switch to contract-based serializers.
- Native interop failures: exclude assemblies that contain P/Invoke signatures or ensure runtime native libraries are present.
- Test failures only after shrinking: analyze stack traces, add targeted keep rules, and re-run.
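For instance, if a shrunk build throws MissingMethodException for a member that is only invoked via reflection, the fix is usually a targeted keep rule naming that member. A sketch (the type and member names are hypothetical, and the pattern syntax depends on your .netshrink version):

```json
{
  "keep": [
    "MyApp.Plugins.ReportRenderer::Render*"
  ]
}
```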
Advanced techniques
- Assembly merging: merge multiple assemblies before shrinking to enable cross-assembly pruning (be cautious with strong-name signed assemblies).
- Selective aggressive shrinking: apply aggressive shrinking only to non-reflection-heavy libraries.
- Conditional linking: create different shrink profiles for debug/test/release or for different platforms.
- Custom IL transforms: use extension points (if provided) to perform domain-specific reductions.
Best practices checklist
- Start conservative: run with a balanced or conservative profile first.
- Use a configuration file and source-control it.
- Preserve types used by reflection, serialization, or native lookup explicitly.
- Test shrunk artifacts in the same environments your users run.
- Automate shrink + test in CI so regressions are caught early.
- Keep telemetry and detailed logging during initial rollout to detect runtime issues.
Example end-to-end workflow
- Build Release artifacts.
- Run netshrink with config: netshrink shrink --config netshrink.json --input bin/Release/… --output bin/Release/shrunk
- Replace assemblies in a test deployment with shrunk versions.
- Run automated tests and measure startup times.
- If regressions appear, refine keep rules and rerun.
- Once stable, integrate shrink step into production publish pipeline.
Final notes
Shrinking can deliver measurable size and startup improvements but requires careful configuration and validation. Treat .netshrink as part of your build toolchain: iterate on keep rules, test broadly, and measure impact on real devices. With a cautious, test-driven approach you can safely reduce assembly size and improve startup performance.
If you want, tell me your project type (console, web, Xamarin/.NET MAUI, library), target runtime, and any reflection/serialization libraries you use — I’ll suggest a tailored netshrink config and specific keep rules.