<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Undocumented Software Development]]></title><description><![CDATA[Documentation for undocumented solutions]]></description><link>https://undocumented.dev/</link><image><url>https://undocumented.dev/favicon.png</url><title>Undocumented Software Development</title><link>https://undocumented.dev/</link></image><generator>Ghost 2.37</generator><lastBuildDate>Fri, 06 Mar 2026 04:48:28 GMT</lastBuildDate><atom:link href="https://undocumented.dev/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Service Fabric Common-Name Certificate: Cluster Upgrade & Rollover]]></title><description><![CDATA[<p>Updating a Service Fabric cluster certificate using thumbprints involves multiple time-consuming cluster upgrades.  
This can be avoided by upgrading the cluster to use common-name instead of thumbprints to reference the certificates.</p><p>Unlike thumbprints, common-name remains the same between certificate versions, so updating to a newer version is as simple as</p>]]></description><link>https://undocumented.dev/service-fabric-common-name-certificate-upgrade-and-rollover/</link><guid isPermaLink="false">5e8786880ba4901510eb96e5</guid><category><![CDATA[service-fabric]]></category><category><![CDATA[security]]></category><category><![CDATA[azure]]></category><category><![CDATA[ARM]]></category><dc:creator><![CDATA[Oliver Grimes]]></dc:creator><pubDate>Fri, 03 Apr 2020 19:52:18 GMT</pubDate><media:content url="https://undocumented.dev/content/images/2020/04/ClusterCert.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://undocumented.dev/content/images/2020/04/ClusterCert.jpg" alt="Service Fabric Common-Name Certificate: Cluster Upgrade & Rollover"><p>Updating a Service Fabric cluster certificate using thumbprints involves multiple time-consuming cluster upgrades.  This can be avoided by upgrading the cluster to use common-name instead of thumbprints to reference the certificates.</p><p>Unlike thumbprints, common-name remains the same between certificate versions, so updating to a newer version is as simple as registering the certificate with the KeyVault &amp; VMSS.</p><p>Because the cluster configuration does not need to change, there are no cluster upgrades required when updating certificates.  
When the new, later expiring certificate has been registered on the VMSS, Service Fabric automatically uses the later expiring version.</p><h3 id="upgrading-to-common-name">Upgrading to Common-Name</h3><!--kg-card-begin: markdown--><p>As described in the <a href="https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-change-cert-thumbprint-to-cn">documentation</a>, the secondary thumbprint should be removed before configuring the cluster &amp; VMSS for common-name:</p>
<blockquote>
<p>If you have two thumbprints declared in your template, you need to perform two deployments. The first deployment is done before following the steps in this article. The first deployment sets your thumbprint property in the template to the certificate being used and removes the thumbprintSecondary property. For the second deployment, follow the steps in this article.</p>
</blockquote>
<!--kg-card-end: markdown--><p>The following ARM templates highlight the configuration changes that need to be deployed for the common-name upgrade.  Ensure a certificate with the common-name used in your template is either already installed in the VMSS or is installed as part of this upgrade.</p><p><strong>Cluster</strong>:</p><!--kg-card-begin: markdown--><p>The following config should be removed from <code>cluster &gt; properties</code>:</p>
<pre><code class="language-JSON">&quot;certificate&quot;: {
  &quot;thumbprint&quot;: &quot;[parameters('certificateThumbprint')]&quot;,
  &quot;x509StoreName&quot;: &quot;[parameters('certificateStoreValue')]&quot;
},
</code></pre>
<p>And replaced with:</p>
<pre><code class="language-JSON">&quot;certificateCommonNames&quot;: {
  &quot;commonNames&quot;: [
    {
      &quot;certificateCommonName&quot;: &quot;[parameters('certificateCommonName')]&quot;,
      &quot;certificateIssuerThumbprint&quot;: &quot;&quot;
    }
  ],
  &quot;x509StoreName&quot;: &quot;[parameters('certificateStoreValue')]&quot;
},
</code></pre>
<!--kg-card-end: markdown--><p><strong>VMSS</strong>:</p><!--kg-card-begin: markdown--><p>Replace the <code>certificate &gt; thumbprint</code> property with <code>commonNames</code> in <code>virtualMachineProfile &gt; extensionProfile &gt; extensions &gt; [type='ServiceFabricNode']</code>.  The result should appear as below:</p>
<pre><code class="language-JSON">&quot;certificate&quot;: {
  &quot;commonNames&quot;: [
    &quot;[parameters('certificateCommonName')]&quot;
  ],
  &quot;x509StoreName&quot;: &quot;[parameters('certificateStoreValue')]&quot;
}
</code></pre>
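Deploying the modified templates can be done with your usual ARM tooling.  As a sketch using Az PowerShell (the resource group and file names below are placeholders, and your cluster &amp; VMSS resources may live in one template or several depending on your setup):

```powershell
# Deploy the ARM template containing the cluster & VMSS changes
# (resource group and file names are placeholders)
New-AzResourceGroupDeployment `
    -ResourceGroupName "cluster-rg-name" `
    -TemplateFile ".\template.json" `
    -TemplateParameterFile ".\template.parameters.json" `
    -Verbose
```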
<!--kg-card-end: markdown--><p>Once the above template modifications have been deployed, the cluster is configured to find certificates by common name. We can now easily automate the certificate update process.</p><p>Certificates can be updated by adding the new version to the VMSS vault using your favourite Azure API.  The following PowerShell imports the new certificate into an existing KeyVault, then adds it to the VMSS vault:</p><!--kg-card-begin: markdown--><pre><code class="language-PowerShell">$subscriptionId            = &quot;sub-id&quot;
$vmssResourceGroupName     = &quot;vmss-rg-name&quot;
$vmssName                  = &quot;vmss-name&quot;
$vaultName                 = &quot;kv-name&quot;
$primaryCertName           = &quot;kv-cert-name&quot;
$certFilePath              = &quot;...\.pfx&quot;
$certPassword              = ConvertTo-SecureString -String &quot;password&quot; -AsPlainText -Force

# Sign in to your Azure account and select your subscription
Login-AzAccount -SubscriptionId $subscriptionId

# Update primary certificate within the Key Vault
$primary = Import-AzKeyVaultCertificate `
    -VaultName $vaultName `
    -Name $primaryCertName `
    -FilePath $certFilePath `
    -Password $certPassword

$certConfig = New-AzVmssVaultCertificateConfig -CertificateUrl $primary.SecretId -CertificateStore &quot;My&quot;

# Get VM scale set 
$vmss = Get-AzVmss -ResourceGroupName $vmssResourceGroupName -VMScaleSetName $vmssName

# Add new certificate version
$vmss.VirtualMachineProfile.OsProfile.Secrets[0].VaultCertificates.Add($certConfig)

# Update the VM scale set 
Update-AzVmss -ResourceGroupName $vmssResourceGroupName -Verbose `
    -Name $vmssName -VirtualMachineScaleSet $vmss
</code></pre>
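After the scale set update completes, it's worth confirming that the new certificate version has been registered.  A quick check using the same variables as above (assuming a single OS-profile secrets group, as in the script):

```powershell
# List the Key Vault certificates now registered on the scale set
$vmss = Get-AzVmss -ResourceGroupName $vmssResourceGroupName -VMScaleSetName $vmssName
$vmss.VirtualMachineProfile.OsProfile.Secrets[0].VaultCertificates |
    Select-Object CertificateUrl, CertificateStore
```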
<!--kg-card-end: markdown--><p>Service Fabric will now use the newer, later expiring certificate.  That's it!</p><p><strong>Update</strong></p><p>After using this process for a few months, the cluster &amp; application health has been fine.  It was observed, however, that error events were being raised on the nodes by Service Fabric: <code>Failed to get private key file. x509FindValue: {commonName}, x509StoreName: My, findType: FindBySubjectName, Error E_FAIL</code></p><p>The following SO post is from a user with the same issue:</p><!--kg-card-begin: bookmark--><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://stackoverflow.com/questions/51134492/servicefabric-standalone-failed-to-get-private-key-file"><div class="kg-bookmark-content"><div class="kg-bookmark-title">ServiceFabric standalone: Failed to get private key file</div><div class="kg-bookmark-description">I have a standalone ServiceFabric cluster (3 nodes). I created SSL certificate for server and client authorization. Then I assign certificate thumbprint to a cluster config. 
Everything work okey( c...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://cdn.sstatic.net/Sites/stackoverflow/Img/apple-touch-icon.png?v=c78bd457575a" alt="Service Fabric Common-Name Certificate: Cluster Upgrade & Rollover"><span class="kg-bookmark-author">Denis Azarov</span><span class="kg-bookmark-publisher">Stack Overflow</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://cdn.sstatic.net/Sites/stackoverflow/Img/apple-touch-icon@2.png?v=73d79a89bded" alt="Service Fabric Common-Name Certificate: Cluster Upgrade & Rollover"></div></a></figure><!--kg-card-end: bookmark--><p>Stay tuned for a less manual mitigation of this error.</p>]]></content:encoded></item><item><title><![CDATA[📈 A High Performance Time-Series Storage Solution - Part 1]]></title><description><![CDATA[<p>If you have a requirement to consume high-frequency time-series data in your application, there are some excellent proprietary time-series database offerings that provide great features and performance.  
For many reasons, however, using a proprietary system isn't always the most appropriate option - in these cases you might consider building</p>]]></description><link>https://undocumented.dev/a-high-performance-time-series-storage-solution/</link><guid isPermaLink="false">5e9c2a851d42f82544f8534f</guid><category><![CDATA[service-fabric]]></category><category><![CDATA[cosmosdb]]></category><category><![CDATA[time-series]]></category><category><![CDATA[azure]]></category><dc:creator><![CDATA[Oliver Grimes]]></dc:creator><pubDate>Wed, 01 Apr 2020 15:36:00 GMT</pubDate><media:content url="https://undocumented.dev/content/images/2020/04/Timeseries.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://undocumented.dev/content/images/2020/04/Timeseries.jpg" alt="📈 A High Performance Time-Series Storage Solution - Part 1"><p>If you have a requirement to consume high-frequency time-series data in your application, there are some excellent proprietary time-series database offerings that provide great features and performance.  For many reasons, however, using a proprietary system isn't always the most appropriate option - in these cases you might consider building your own storage and retrieval solution.</p><p>This post details an efficient storage solution for ingesting and querying high-frequency time-series data. The example uses Service Fabric Actors and Azure CosmosDB; however, the concept can easily be adapted to your own specific platform and storage system. Service Fabric Actors provide a turn-based access model, which means that actor state can be used as our cache without having to worry about locking to prevent concurrent writes, even across multiple VMs.<br><br>In summary, the solution is a write-back caching strategy that pre-aggregates and compresses time-series into fixed intervals. 
This reduces the volume of historic records, making historic queries much faster and storage cheaper, while allowing a <em>longer recent history</em> of data to remain in cache for fast access. This strategy is inspired by time-series databases and can easily be implemented using existing resources.</p><p>The solution is geared towards high-frequency telemetry, where the number of writes is high, and the reads are most commonly performed against recent history.  It's also adapted to suit append-heavy workloads, and while it supports updating older records, it may not be the right solution if updates are frequent.</p><h3 id="aggregation">Aggregation</h3><p>Querying large volumes of high-frequency time-series data can be optimised by pre-aggregating data into larger intervals.  For example, querying against a few months' worth of 1-minute interval data (at 1440 samples per day) will be less efficient than querying a few months' worth of daily data.</p><p>The aggregation interval can be adjusted to suit your application, and should be small enough to reduce the number of updates on existing data, but large enough to significantly reduce the number of records being queried.</p><h3 id="compression">Compression</h3><p>The compression algorithm used in this example was developed by Facebook for their <a href="https://blog.acolyer.org/2016/05/03/gorilla-a-fast-scalable-in-memory-time-series-database/">Gorilla time-series database</a>.  It takes advantage of the inherent repetitive properties of time-series data, storing the delta of deltas for the timestamp and storing only 'meaningful' bits for values, omitting leading &amp; trailing zeros.  
For regular-interval sampling, timestamps are reduced to single control bits that specify the interval is the same as the previous one.</p><p>This algorithm is very fast and has a high compression ratio for numeric time-series data; however, the approach outlined in this post can be implemented using any time-series compression algorithm.</p><blockquote>There are many open-source implementations of Gorilla compression; <a href="https://github.com/olivergrimes/BlueEyes">this .NET Core</a> implementation has been tried and tested.</blockquote><h3 id="write-back-caching">Write-Back Caching</h3><p>Write-back cache describes a system that immediately caches new data and only moves it to long-term storage after a period of time.  This works well for time-series data, as it's common for recent data to be accessed more frequently than historic data. Requests can be served using a combination of the cache and database, concatenating the data before sending a response.</p><p>Write-back caching works well with Gorilla compression, as we can quickly append new values to the latest period of compressed data.  The decreased size of the compressed time-series allows more of it to be stored in cache, meaning we can keep a longer period of recent history, leading to more cache hits.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://undocumented.dev/content/images/2020/04/Writethrough-cache.svg" class="kg-image" alt="📈 A High Performance Time-Series Storage Solution - Part 1"><figcaption>An overview of the caching strategy. Compression is immediately performed on appended data, aggregated into fixed intervals and after a time, moved out of the cache and into a database for historic querying.</figcaption></figure><!--kg-card-end: image--><p>A cache-miss would mean that the remaining data is queried from the database. 
This will not be as fast as serving data from the cache; however, the aggregation of data into longer intervals means that database queries will run much faster for high-frequency time-series.</p><h3 id="service-fabric-actor-cosmosdb-implementation-part-1">Service Fabric Actor &amp; CosmosDB Implementation - Part 1</h3><p>Service Fabric Actors keep state and logic closely coupled, which makes for a great in-memory <strong>cache</strong>. Compression reduces the memory footprint of cached data, and the time period that is cached can be shortened if required. This solution maps each distinct series to a single actor instance.</p><p>CosmosDB is a good <strong>database</strong> choice for scalable volumes of data as it automatically partitions collections as the data volume grows, provided an appropriate partition key is chosen.  For this solution, a hash of series id and year is a good partitioning scheme: we'll be querying individual series from within series-specific actors, and the addition of year to the partition key means that partitions will not grow indefinitely as time rolls on.</p><p>Thanks for reading. Part 2 will dive deeper into the implementation detail, including how updates are handled. Stay tuned!</p>]]></content:encoded></item><item><title><![CDATA[SignalR Scaleout Using Service Fabric Actor Events]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h1 id="overview">Overview</h1>
<!--kg-card-end: markdown--><p>ASP.NET Core SignalR provides an abstraction over websocket connections, making it very easy to get up and running with "real-time" pub/sub functionality. When using SignalR within a Service Fabric application, it's likely there will be multiple stateless service instances that a client could have a websocket connection</p>]]></description><link>https://undocumented.dev/signalr-scaleout-using-service-fabric-actor-events/</link><guid isPermaLink="false">5dc56b27fdef164b44a40054</guid><category><![CDATA[service-fabric]]></category><category><![CDATA[signalr]]></category><category><![CDATA[reliable-actors]]></category><dc:creator><![CDATA[Oliver Grimes]]></dc:creator><pubDate>Wed, 19 Feb 2020 21:20:00 GMT</pubDate><media:content url="https://undocumented.dev/content/images/2019/11/SignalRServiceFabric.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="overview">Overview</h1>
<!--kg-card-end: markdown--><img src="https://undocumented.dev/content/images/2019/11/SignalRServiceFabric.jpg" alt="SignalR Scaleout Using Service Fabric Actor Events"><p>ASP.NET Core SignalR provides an abstraction over websocket connections, making it very easy to get up and running with "real-time" pub/sub functionality. When using SignalR within a Service Fabric application, it's likely there will be multiple stateless service instances that a client could have a websocket connection with.</p><p>This means that when any service in our application publishes events intended for clients, we need a mechanism to distribute these events across each SignalR service instance to ensure all subscribed clients receive the event.</p><!--kg-card-begin: markdown--><h1 id="backplanes">Backplanes</h1>
<!--kg-card-end: markdown--><p>The easiest method of scaling SignalR across multiple servers is to use a backplane. Backplanes broadcast messages to each connected SignalR service, making them unsuitable for some situations, with the potential to <a href="https://docs.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-in-signalr#limitations">become a bottleneck</a> in high traffic environments.</p><p>In scenarios that generate high-frequency or user-specific events, it would be a waste of resources to use a backplane that broadcasts every event to every instance of our SignalR service.</p><p>In these scenarios, publishers should only send messages to the <em>appropriate </em>connected clients that have subscribed to that particular <em>topic</em>.  To allow scalability &amp; availability while avoiding the overhead of a backplane, we need to broker events only between publisher and subscribers.</p><!--kg-card-begin: markdown--><h1 id="reliableactorsandtopics">Reliable Actors And Topics</h1>
<!--kg-card-end: markdown--><p>Service Fabric Actors and Actor Events provide a ready-made pub/sub event system that can be used between Actors &amp; services.  Importantly, Actor Events supports the publishing of events to specific <em>stateless </em>service instances.  This is very useful in scenarios where SignalR is hosted in a stateless ASP.NET Core service.</p><p>The Actor model aligns nicely with pub/sub scenarios, as each Actor instance can be used to represent a specific <em>topic</em>.  For example, we might be building a chat system, where each user is represented by an Actor instance.</p><p>Services can subscribe to specific topics by subscribing to events on a specific Actor instance.  This gives us the ability to publish targeted events to specific service instances, rather than using a backplane.</p><p>This also keeps our network calls within our Service Fabric cluster, negating the requirement for an external backplane resource with associated cost.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://undocumented.dev/content/images/2020/01/Untitled-Diagram--1-.svg" class="kg-image" alt="SignalR Scaleout Using Service Fabric Actor Events"><figcaption>Actor Services &amp; Actor Events provide an internal pub/sub event system that can be used to target specific stateless service instances.</figcaption></figure><!--kg-card-end: image--><!--kg-card-begin: markdown--><h2 id="implementation">Implementation</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="topicclient">Topic Client</h4>
<p>In order to leverage Actor Events as a pub/sub system for SignalR, we need to orchestrate subscriptions and events between clients and Actors.  We can wrap up this functionality into a <code>TopicClient</code> class that can:</p>
<ul>
<li>Subscribe to Topic Actors and persist actor proxies</li>
<li>Map subscribed client connection ids to the appropriate proxies</li>
<li>Receive Topic Actor events &amp; forward to relevant clients using <code>IHubContext</code></li>
<li>Unsubscribe from and remove unused Actor proxies</li>
</ul>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/2ed593799605fc23641a37e4661cee3f.js"></script><!--kg-card-end: html--><p>When a Topic Actor publishes an event, any connected <code>TopicClient</code> instances will receive the event.  The <code>TopicClient</code> uses the persisted connection ids and <code>HubContext</code> to forward the message on to the appropriate clients.</p><p>The <code>TopicClient</code> will not create multiple subscriptions to the same Actor; multiple clients subscribed to the same topic on the same service instance will therefore be served by a shared actor proxy.</p><!--kg-card-begin: markdown--><h4 id="topichub">Topic Hub</h4>
<!--kg-card-end: markdown--><p>This solution is designed for pub/sub scenarios, so we'll use a <code>TopicHub</code> base class to provide the common <code>Subscribe</code>, <code>Unsubscribe</code> and <code>OnMessage</code> functionality for a particular type of subscription.</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/549e2505e391f09a0c27106200ab6f8b.js"></script><!--kg-card-end: html--><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/42066499396565725b5a7b5779e2a666.js"></script><!--kg-card-end: html--><p>All that's then required is to inherit this base class and specify the types used for each subscription, for example:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/3977d8ddb01f32d19a31ce151e1f7f99.js"></script><!--kg-card-end: html--><p>Full source, documentation and a working example can be found in <a href="https://github.com/olivergrimes/servicefabric-topicactor-signalr">this GitHub repository</a>. There's also a <a href="https://www.nuget.org/packages/servicefabric-topicactor-signalr/">Nuget package</a>.</p><!--kg-card-begin: markdown--><h1 id="tradeoffs">Trade Offs</h1>
<!--kg-card-end: markdown--><p>It's worth noting that Actor Events are described in the <a href="https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-actors-events">documentation</a> as only being "Best effort".  I presume this is down to the transient nature of Actors across nodes between upgrades and fail-overs.  I couldn't find any more detail on that distinction, however as this solution was designed to support some snazzy real-time UI features (nothing business-critical), this risk is acceptable.</p><p>Any Actor Proxies within a <code>TopicClient</code> will be lost if our SignalR service stops, however if this happens, client hub connections will also get disconnected and can then re-connect to another service instance which will subscribe to the appropriate Actors.</p><p>Event publishing services within our application are not aware of connected clients and will publish to the Topic Actor whether there are any subscribers or not.  This service remoting overhead is acceptable, given that after this point the Actor event will only propagate to Actor Event subscribers.</p><!--kg-card-begin: markdown--><h1 id="summary">Summary 🏁</h1>
<!--kg-card-end: markdown--><p>SignalR allows us to create websocket connections between client and server.  Within a Service Fabric application, services hosting SignalR can have multiple instances, however an individual client will only be connected to one instance.</p><p>Backplanes offer a quick and easy solution to replicate events across all service instances, however they can become a bottleneck when an application generates high-frequency, user-specific events.</p><p>Events can be directly brokered to only the appropriate SignalR service instances using Actor Events in combination with a <code>TopicClient</code> broker.</p><p>This solution is scalable and flexible enough to support many different event types that an application may generate.  It also leverages the existing Service Fabric cluster, rather than an external service with potential extra cost.</p><p>Full source, documentation and demo SF app can be found <a href="https://github.com/olivergrimes/servicefabric-topicactor-signalr">here</a>.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Redux Connected Higher-Order React Component using TypeScript 👨‍🍳]]></title><description><![CDATA[<p>While working on a feature-toggling system that allows users to switch preview features on and off, I wanted to create a <code>featureAware</code> enhancer-type HOC that I could use to wrap existing enhancers, e.g. 
<code>withRouter</code> &amp; <code>connect</code>:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/4160235e7da6bcbacdfb35e54ddb9bf1.js"></script><!--kg-card-end: html--><p><br>I also wanted this enhancer to work on its own:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/9a24052fcdbecbe766300578bc09bbf9.js"></script><!--kg-card-end: html--><p>In this case</p>]]></description><link>https://undocumented.dev/redux-connected-higher-order-react-component-with-typescript/</link><guid isPermaLink="false">5e3bde7bcdae3820d0eb9d0a</guid><category><![CDATA[React]]></category><category><![CDATA[Redux]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[Oliver Grimes]]></dc:creator><pubDate>Thu, 06 Feb 2020 09:55:37 GMT</pubDate><media:content url="https://undocumented.dev/content/images/2020/04/Oliver.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://undocumented.dev/content/images/2020/04/Oliver.jpg" alt="Redux Connected Higher-Order React Component using TypeScript 👨‍🍳"><p>While working on a feature-toggling system that allows users to switch preview features on and off, I wanted to create a <code>featureAware</code> enhancer-type HOC that I could use to wrap existing enhancers, e.g. 
<code>withRouter</code> &amp; <code>connect</code>:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/4160235e7da6bcbacdfb35e54ddb9bf1.js"></script><!--kg-card-end: html--><p><br>I also wanted this enhancer to work on its own:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/9a24052fcdbecbe766300578bc09bbf9.js"></script><!--kg-card-end: html--><p>In this case the HOC is used to inject a <code>hasFeature</code> function into enhanced components:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/74c0903df0c8d3a409c146237e5cbd40.js"></script><!--kg-card-end: html--><p>This application already used Redux, and the selected feature-toggles were being persisted to the server.  I wanted feature-toggle updates to instantly show or hide features in the application.</p><p>Components using the <code>featureAware</code> enhancer HOC can use the <code>hasFeature</code> function to allow conditional feature-toggle-based behaviour.</p><p>After a bit of work with types, taking inspiration from <code>withRouter</code>, I created the following HOC that can be used either on its own or by wrapping other enhancers. It also allows pass-through of component-level props (like <code>ownProps</code> in Redux <code>connect</code>):</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/0d41f4119b7bd15e3cb76de1eb3f3869.js"></script><!--kg-card-end: html--><p>I briefly tried, but couldn't seem to avoid that ugly <code>unknown</code> props cast or the cast to <code>React.Component</code>. 
It works nicely though; the casts don't prevent the correct types from being inferred when using the component.</p>]]></content:encoded></item><item><title><![CDATA[🔐 Declarative Resource-Based Authorisation With ASP.NET Core]]></title><description><![CDATA[Resource-based authorisation in ASP.NET Core applications can be performed in a declarative manner using custom attributes and action filters.]]></description><link>https://undocumented.dev/declarative-resource-based-authorisation-with-asp-net-core/</link><guid isPermaLink="false">5dc56b27fdef164b44a40056</guid><category><![CDATA[asp.net]]></category><category><![CDATA[webapi]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Oliver Grimes]]></dc:creator><pubDate>Thu, 07 Nov 2019 13:29:07 GMT</pubDate><media:content url="https://undocumented.dev/content/images/2019/11/ActionFilterAuthorisation.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://undocumented.dev/content/images/2019/11/ActionFilterAuthorisation.jpg" alt="🔐 Declarative Resource-Based Authorisation With ASP.NET Core"><p>It is common for web applications to have many different endpoints/actions with different authorisation requirements. For example, the API behind this blog only allows users with a sufficient role to create new posts.</p><p>In ASP.NET Core applications, the framework-provided <code>[Authorize]</code> attribute allows claims-based authorisation; however, this doesn't take into account the resource being accessed. An application may have user-specific resources, be multi-tenant or have some other custom security model. 
In these cases, authorisation often needs to be granted based on resource-level access.</p><p><a href="https://docs.microsoft.com/en-us/aspnet/core/security/authorization/resourcebased?view=aspnetcore-3.0#use-imperative-authorization">Imperative-style authorisation</a> using <code>IAuthorizationService</code> can be used in these circumstances; however, with this method we lose the descriptive, declarative simplicity of attribute-based authorisation.</p><p>This post provides an example of how we can create custom Action Filters and Attributes to provide a flexible, efficient and descriptive method of resource authorisation.</p><h3 id="authorisation-rule-attributes">Authorisation Rule Attributes</h3><p>We can use attributes to decorate each endpoint with a rule that's specific to the request content and resource being accessed.</p><p>The authorisation rule can be simplified to the following interface:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/017e7a5c8b1955a0fc0950b5340a4124.js"></script><!--kg-card-end: html--><p>We can then create an attribute class that implements the interface we have defined:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/2b08b86ef16c143106070bbdcccc9c33.js"></script><!--kg-card-end: html--><p>The example above uses the <code>idArgumentName</code> value from the constructor to extract the requested <code>id</code> from <code>context.ActionArguments</code>. It uses an <code>IItemLookup</code> and <code>IUserIdProvider</code> service (both examples, not framework-derived!) 
from the service provider to check if the current user has access to the resource with the <code>id</code> extracted from the request.</p><p>The <code>ItemOwner</code> attribute can then be used to decorate endpoints that require the user to own the resource they are accessing:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/7425c57577a108f1a66e37007aaa4e31.js"></script><!--kg-card-end: html--><p>With the simple <code>IAuthorisationRule</code> interface, it's trivial to create multiple implementations of these attributes, each tailored to a specific resource security requirement.</p><h3 id="authorisation-filter">Authorisation Filter</h3><p>In its current state, the example above offers no protection as it will not be run as part of the <strong>request pipeline</strong>. We can use Action Filters to intercept the request, consume our <code>IAuthorisationRule</code> attributes and check user access before allowing the request to proceed.</p><p>Create an Action Filter and register it globally to intercept every request to our application:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/a62b84051a2fe1659c3bce43d01994fd.js"></script><!--kg-card-end: html--><p>Register the filter in <code>Startup</code>:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/bf607be2254be84de70a5efcaa4862ea.js"></script><!--kg-card-end: html--><p>Now that this filter has been registered globally, our earlier example becomes operational:</p><!--kg-card-begin: html--><script src="https://gist.github.com/olivergrimes/7425c57577a108f1a66e37007aaa4e31.js"></script><!--kg-card-end: html--><p>The <code>ResourceAuthorisationFilter</code> will now run the <code>HasAccess</code> method on each of the <code>IAuthorisationRule</code> type attributes that the action is decorated with. 
With the current setup, any action not decorated with an <code>IAuthorisationRule</code> type attribute will not be impacted.</p><p>With this basic interface, <strong>we can easily extend our collection of rules</strong> to provide authorisation logic to check user access against new controller actions, with different resource access requirements. This solution provides a neat, reusable, declarative approach to resource-based authorisation.</p><h2 id="considerations">Considerations</h2><p>It's important to consider some of the drawbacks when using this declarative approach. We're accessing some information about the resource <em>before the action body is hit</em>, and it's likely that this resource will be accessed again within the action.</p><p>This disconnect between the authorisation filter and the action could be the source of security issues if the resources being accessed are different between the two. Security checks taking place in the authorisation filter <em>must </em>cover all request parameters that can subsequently be used to access resources. In any case, it's good practice to create endpoint integration tests that ensure the rules are correctly applied.</p><h2 id="-summary">🏁 Summary</h2><p>Resource-based authorisation in ASP.NET Core applications can be performed in a declarative manner using custom attributes and action filters. 
</p><p>Decorating endpoints/actions with attributes in this way greatly simplifies the implementation of authorisation, making it very easy to apply authorisation rules to new endpoints within the application.</p><p>Using a common <code>IAuthorisationRule</code> attribute interface alongside a global action filter gives developers the flexibility to create authorisation rules that are specific to the security model of the application.</p><!--kg-card-begin: hr--><hr><!--kg-card-end: hr--><p>Full source code and an example can be found in <a href="https://github.com/olivergrimes/ResourceBasedAuthorisation">this github repo</a>.</p>]]></content:encoded></item></channel></rss>