How to Junk or Delete Any Email Where the From Name Matches a Wildcard


✅ Example: How to Junk Any Email Where the From Name Contains “BlueCross”

Outlook lets you create a rule using:

“with specific words in the message header”.

The message header includes the From name, so this works even if the sender name varies:

  • BlueCross
  • BlueCross Hospital
  • BlueCross Billing
  • BlueCross Anything

As long as “BlueCross” appears anywhere in the header, the rule will match.


🛠️ Steps (Windows Outlook desktop)

  1. Go to File → Manage Rules & Alerts
  2. Click New Rule
  3. Choose Apply rule on messages I receive
  4. Under conditions, check:
    “with specific words in the message header”
  5. Click specific words
  6. Add: BlueCross
  7. Click Next
  8. Under actions, choose:
    “move it to the specified folder” → select Junk Email
  9. Save the rule

This will catch any email where the From name contains “BlueCross”.


⚠️ Important limitation (from Microsoft)

Rules do not run on messages that Microsoft’s server already places in Junk.
So this rule only applies to messages that land in your Inbox.

But for your use case — catching variations of a sender name — the header‑word rule works perfectly.


What is C:\inetpub\DeviceHealthAttestation\bin\hassrv.dll?

C:\inetpub\DeviceHealthAttestation\bin\hassrv.dll is a legitimate Windows component. It belongs to the Device Health Attestation (DHA) service, a Microsoft security feature introduced in Windows 10 and Windows Server 2016. The DLL is part of the server role that validates a device’s Trusted Platform Module (TPM) and boot measurements, then issues a health attestation report.


🔍 What Device Health Attestation Does

  • Purpose: DHA verifies that a device booted securely (e.g., Secure Boot, BitLocker, ELAM checks) and that the Windows kernel wasn’t tampered with.
  • Integration: It works with Mobile Device Management (MDM) solutions and Active Directory/Entra ID to enforce compliance policies.
  • Deployment: Microsoft offers DHA as:
    • A cloud service (Microsoft-managed, free).
    • An on-premises server role (installed on Windows Server, where hassrv.dll is part of the role).
    • An Azure-hosted service.

📂 Why You See It in C:\inetpub\DeviceHealthAttestation

  • The DHA server role installs into IIS (inetpub) because it exposes a web API endpoint for devices to submit attestation requests.
  • The hassrv.dll file is the Health Attestation Service library that processes those requests.
  • Admins have reported copying hassrv.dll from a Server 2016 install to fix DHA role issues on newer Server versions.
  • Microsoft’s own documentation and patch notes reference this file as part of the DHA feature.

✅ Safety Assessment

  • Safe if installed by Windows Update or Server Role Manager: It’s a signed Microsoft DLL, not malware.
  • Check digital signature: Right‑click → Properties → Digital Signatures tab. It should be signed by Microsoft Windows.
  • Context matters: If you didn’t manually place it there and it appeared after a Windows update or server role installation, it’s expected and safe.

In short: hassrv.dll is a normal part of Microsoft’s Device Health Attestation service. If it’s digitally signed by Microsoft and located under C:\inetpub\DeviceHealthAttestation\bin\, you can consider it safe.

Confetti Celebration for Webpage

To make a confetti celebration trigger only once per session, even if the user reloads the page multiple times, the simplest approach is sessionStorage (built into browsers). It persists for the current browser tab/session but resets when the tab is closed.

<script src="https://cdn.jsdelivr.net/npm/canvas-confetti@1.6.0/dist/confetti.browser.min.js"></script>
<script>
document.addEventListener("DOMContentLoaded", function() {
  // check if celebration has already run in this session
  if (!sessionStorage.getItem("celebrationDone")) {
    setTimeout(function() {
      confetti({
        particleCount: 150,
        spread: 70,
        origin: { y: 0.6 }
      });
      // mark as done
      sessionStorage.setItem("celebrationDone", "true");
    }, 2000);
  }
});
</script>

Fix Microsoft Copilot on Windows 10

Here are a few troubleshooting steps you can try to get your Microsoft Copilot (on Windows 10) working again:

✅ Quick things to check

  1. Internet connection — Make sure you’re online and the connection is stable.
  2. Updates — Check that Windows 10 is fully updated, and that Copilot (or the app/feature) itself is up to date.
  3. Account & region — Ensure you’re signed in with your Microsoft account and that your region/language settings are correct. Some users reported odd behaviour when the region/language was mismatched.
  4. Firewall/Antivirus — It’s possible your security software is blocking Copilot’s connections. Temporarily disable or whitelist it to test.

🛠 Deeper fixes to try

  • Go to Settings → Apps → Installed apps, find “Copilot” (or “Microsoft Copilot” / “Copilot for Windows”), click Advanced options and then Repair or Reset.
  • If you’re using Microsoft Edge and Copilot integrates there or via web, try clearing Edge’s cache or using a new Edge profile. Corrupt browser profiles/extensions have caused issues.
  • If you’re on a work or school PC: check if any group policy or network appliance is restricting TLS versions or intercepting secure traffic — this has caused Copilot to fail.

In MySQL MyISAM tables how do you free up disk space after deleting a lot of records in a table?

In MySQL with MyISAM tables, deleting rows does not automatically shrink the physical .MYD data file. To reclaim disk space after large deletions, you need to run:

OPTIMIZE TABLE your_table_name;

🔧 What OPTIMIZE TABLE does for MyISAM:

  • Rewrites the entire .MYD file (data file) and .MYI file (index file).
  • Removes unused space left by deleted rows.
  • Reorders and defragments indexes.
  • Frees up disk space and improves performance.

🛠️ Example:

OPTIMIZE TABLE suppression.mailserverdomains_segfilter;

After running this, check the file size:

ls -lh /var/lib/mysql/suppression/mailserverdomains_segfilter.*

You should see .MYD shrink if deletions were significant.


⚠️ Notes:

  • This operation locks the table during optimization.
  • It can be slow on large tables — consider running during off-peak hours.
  • Works only for MyISAM — for InnoDB, you’d use ALTER TABLE ... ENGINE=InnoDB or rely on innodb_file_per_table.
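The before/after effect can be checked from the shell. This is a sketch that assumes the mysql client can authenticate without prompting (e.g., via ~/.my.cnf) and uses the example table above; information_schema’s DATA_FREE column reports the bytes left behind by deleted rows, which OPTIMIZE TABLE reclaims.

```shell
# Table from the example above; adjust to your schema/table.
DB="suppression"
TABLE="mailserverdomains_segfilter"

# Reclaimable space before optimizing (DATA_FREE, in bytes)
before=$(mysql -N -B -e \
  "SELECT DATA_FREE FROM information_schema.TABLES
   WHERE TABLE_SCHEMA='$DB' AND TABLE_NAME='$TABLE';" 2>/dev/null || true)
echo "Data_free before: ${before:-unknown} bytes"

# Rewrite the .MYD/.MYI files
mysql -e "OPTIMIZE TABLE $DB.$TABLE;" 2>/dev/null || true

# Should be at (or near) zero afterwards
after=$(mysql -N -B -e \
  "SELECT DATA_FREE FROM information_schema.TABLES
   WHERE TABLE_SCHEMA='$DB' AND TABLE_NAME='$TABLE';" 2>/dev/null || true)
echo "Data_free after: ${after:-unknown} bytes"
```

If DATA_FREE was already near zero, OPTIMIZE will not shrink the files much, and you can skip the (table-locking) rewrite.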

PHP Script to Delete from Supabase Storage

Here’s a clean, actionable PHP script that connects to an S3-compatible storage system (e.g., AWS S3, MinIO, DigitalOcean Spaces, Supabase Storage with the S3 API enabled) and deletes objects older than a configurable cutoff (12 months in the example below).

It uses the official AWS SDK for PHP (aws/aws-sdk-php), which works with any S3-compatible endpoint.


📜 PHP Script

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\Exception\AwsException;

// Configure your S3-compatible storage
$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-west-1', // adjust if needed
    'endpoint'    => '', // e.g. https://s3.amazonaws.com or https://nyc3.digitaloceanspaces.com
    'use_path_style_endpoint' => true, // often required for non-AWS S3
    'credentials' => [
        'key'    => '',
        'secret' => '',
    ],
]);

$bucket = 'faceless-images';

// Calculate cutoff date (12 months ago in your example)
$cutoff = new DateTime();
$cutoff->modify('-12 months');

try {
    // List all objects in the bucket
    $objects = $s3->getPaginator('ListObjectsV2', [
        'Bucket' => $bucket,
    ]);

    foreach ($objects as $page) {
        if (!isset($page['Contents'])) {
            continue;
        }

        foreach ($page['Contents'] as $object) {
            $key = $object['Key'];

            try {
                // Fetch object metadata
                $head = $s3->headObject([
                    'Bucket' => $bucket,
                    'Key'    => $key,
                ]);

                $meta = $head['Metadata'];

                if (isset($meta['created-at'])) {
                    $createdAt = new DateTime($meta['created-at']);

                    if ($createdAt < $cutoff) {
                        echo "Deleting: {$key} (Created at: {$createdAt->format('Y-m-d')})\n";

                        $s3->deleteObject([
                            'Bucket' => $bucket,
                            'Key'    => $key,
                        ]);
                    }
                } else {
                    // Fallback: if no created-at metadata, you could skip or use LastModified
                    // $lastModified = new DateTime($object['LastModified']);
                }

            } catch (AwsException $e) {
                echo "Error fetching metadata for {$key}: " . $e->getMessage() . "\n";
            }
        }
    }

    echo "Cleanup complete.\n";

} catch (AwsException $e) {
    echo "Error: " . $e->getMessage() . "\n";
}

⚙️ How It Works

  1. Connects to your S3-compatible endpoint with credentials.
  2. Lists all objects in the bucket (paged).
  3. Reads each object’s created-at metadata timestamp and compares it against the cutoff (12 months ago); a fallback to LastModified is left as a commented option.
  4. Deletes objects older than the cutoff.

✅ Notes

  • Install the AWS SDK with: composer require aws/aws-sdk-php
  • Replace endpoint, key, secret, and bucket with your actual values.
  • If you’re using AWS S3 itself, you can omit the endpoint and use_path_style_endpoint options.
  • For Supabase Storage, you’d need to enable the S3-compatible API and use the provided endpoint/keys.
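Before letting the script delete anything, a read-only dry run can help. This sketch uses the AWS CLI and approximates the check by LastModified age rather than the created-at metadata the script reads; S3_ENDPOINT is an assumed environment variable holding your S3-compatible endpoint, and object keys are assumed to contain no whitespace.

```shell
# Cutoff matching the script: 12 months ago (GNU date)
cutoff=$(date -d '12 months ago' +%s)

# List key + LastModified for every object, print the ones past the cutoff
aws s3api list-objects-v2 \
    --bucket faceless-images \
    --endpoint-url "$S3_ENDPOINT" \
    --query 'Contents[].[Key,LastModified]' \
    --output text 2>/dev/null |
while read -r key modified; do
    if [ "$(date -d "$modified" +%s)" -lt "$cutoff" ]; then
        echo "would delete: $key"
    fi
done
```

If the list looks right, run the PHP script; if objects lack the created-at metadata, the LastModified fallback commented in the script becomes the deciding field.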

To flush the DNS cache on an Ubuntu server

To flush the DNS cache on an Ubuntu server, the method depends on which DNS resolver your system is using. Here’s how to do it for the most common setups:


🧼 For Ubuntu 22.04+ (Using systemd-resolved)

Ubuntu now uses systemd-resolved for DNS caching. To flush the cache:

sudo resolvectl flush-caches

You can verify the cache was cleared by checking statistics:

resolvectl statistics

Look for Current Cache Size: 0 to confirm the flush.


🧼 For Ubuntu 20.04 and Earlier (Using systemd-resolve)

If your system still uses the older command:

sudo systemd-resolve --flush-caches

This is functionally similar and works on many 20.04 setups.


🧼 If You’re Using dnsmasq

Some Ubuntu setups (especially custom or lightweight ones) use dnsmasq:

sudo systemctl restart dnsmasq

This restarts the service and clears its cache.


🧼 If You’re Using bind9

For servers running BIND:

sudo rndc flush

Or to flush a specific zone:

sudo rndc flushname example.com

🧠 Pro Tip

If you’re troubleshooting DNS issues, you might also want to clear your local resolver cache and browser cache, or test with:

dig example.com

to bypass local caching and query directly.
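Before flushing, it helps to confirm which of the setups above is actually in use. A minimal sketch, assuming a systemd-based Ubuntu (on recent releases the bind9 service unit is called named):

```shell
# Probe the common caching resolvers and report the matching flush command
if systemctl is-active --quiet systemd-resolved 2>/dev/null; then
    flush_cmd="sudo resolvectl flush-caches"
elif systemctl is-active --quiet dnsmasq 2>/dev/null; then
    flush_cmd="sudo systemctl restart dnsmasq"
elif systemctl is-active --quiet named 2>/dev/null; then
    flush_cmd="sudo rndc flush"
else
    flush_cmd=""
fi
echo "${flush_cmd:-no local caching resolver detected}"
```

If nothing is detected, the server is likely forwarding queries straight to an upstream resolver, and there is no local cache to flush.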

Cloudflare ERR_SSL_VERSION_OR_CIPHER_MISMATCH

This error usually means your browser and Cloudflare can’t agree on a secure connection for your domain. Common causes are Cloudflare not yet serving a valid certificate for the hostname, or (in Full / Full Strict mode) an SSL/TLS mismatch with your origin server. Since you were previously using SSL on Apache and just moved to Cloudflare, here’s how to troubleshoot it step-by-step:


🔍 What the Error Means

ERR_SSL_VERSION_OR_CIPHER_MISMATCH indicates that:

  • The browser and server don’t agree on a common SSL/TLS protocol or cipher.
  • Cloudflare can’t present a valid certificate for your domain.
  • Your origin server may be misconfigured or using outdated SSL settings.

✅ Fixes You Should Try

1. Check Cloudflare SSL Mode

Log into your Cloudflare dashboard:

  • Go to SSL/TLS > Overview
  • Make sure SSL mode is set to Full or Full (Strict)
    Avoid Flexible, as it can cause issues with sites that already have SSL.

2. Verify Edge Certificate Status

  • Go to SSL/TLS > Edge Certificates
  • Look for the Universal SSL certificate for your domain.
  • Make sure its status is Active
    If it’s still initializing, it can take up to 24 hours after DNS propagation.

3. Ensure DNS Records Are Proxied

  • Go to DNS > Records
  • Make sure your A or CNAME records for site.com are set to Proxied (orange cloud icon).
    If they’re DNS only, Cloudflare won’t serve SSL for that subdomain.

4. Check Apache SSL Configuration

On your origin server:

  • Make sure Apache is listening on port 443.
  • Confirm that your SSL certificate is valid and not expired.
  • Ensure you’re using modern TLS protocols (e.g., TLS 1.2 or 1.3) and not deprecated ones like SSLv3 or TLS 1.0.

Example Apache config snippet:

<VirtualHost *:443>
    ServerName site.com
    SSLEngine on
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/key.pem
    SSLCertificateChainFile /path/to/chain.pem

    SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
    SSLCipherSuite HIGH:!aNULL:!MD5
</VirtualHost>

5. Temporarily Pause Cloudflare

If you need to test directly:

  • Go to Overview > Advanced Actions
  • Click Pause Cloudflare on Site
  • Then access your site directly via its IP or domain to confirm SSL is working.
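To confirm step 4 from the shell, you can probe the origin directly with openssl s_client and see which TLS versions it negotiates. A sketch, where ORIGIN is a hypothetical placeholder for your origin’s direct IP or hostname (so the test bypasses Cloudflare):

```shell
ORIGIN="203.0.113.10"   # placeholder: your origin's direct address
for ver in tls1 tls1_1 tls1_2 tls1_3; do
    # echo closes stdin so s_client exits after the handshake
    if echo | openssl s_client -connect "$ORIGIN:443" \
            -servername site.com "-$ver" >/dev/null 2>&1; then
        echo "$ver: accepted"
    else
        echo "$ver: rejected"
    fi
done
```

For Full (Strict) mode you want tls1_2 or tls1_3 accepted; if only the older versions succeed, the Apache SSLProtocol line above needs fixing first.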

Debugging a 500 Server Error in Laravel (Nginx + PHP-FPM)

When Laravel throws a 500, you’ll need to trace it through three layers: Laravel itself, Nginx, and PHP-FPM. Here’s where to look and how to pinpoint the issue.


1. Turn on Laravel Debug

  1. Open your project’s .env file.
  2. Set APP_DEBUG=true and APP_ENV=local
  3. Save and reload your page.
    – You should now see a detailed stack trace instead of the generic 500 page.

2. Inspect Laravel Logs

Laravel logs errors in storage/logs. Usually you’ll find:

cd /home/mailivery/mailivery
ls storage/logs
tail -n 50 storage/logs/laravel.log

– Look for the latest timestamped file (laravel-YYYY-MM-DD.log).
– Errors, exceptions, SQL problems and failed queue jobs all land here.
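The steps above can be collapsed into a small sketch that finds the newest log file and pulls out the latest errors; APP_DIR defaults to the example path and should be adjusted to your project root:

```shell
APP_DIR="${APP_DIR:-/home/mailivery/mailivery}"

# Newest log file wins (covers both laravel.log and daily laravel-YYYY-MM-DD.log)
latest=$(ls -t "$APP_DIR"/storage/logs/*.log 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
    echo "Newest log: $latest"
    grep -n -E 'ERROR|Exception' "$latest" | tail -n 5
else
    echo "No log files found under $APP_DIR/storage/logs"
fi
```

The first ERROR line it prints is usually the root cause; everything after it tends to be knock-on failures.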


3. Check Nginx Error Logs

By default Nginx writes to:

/var/log/nginx/error.log

Or, if you have a per-site config:

/var/log/nginx/your-site-error.log

Tail it live as you refresh:

sudo tail -f /var/log/nginx/error.log

Common culprits: wrong root path, missing index.php, bad fastcgi_pass.


4. Review PHP-FPM Logs

Depending on your PHP version, you’ll have one or more of:

/var/log/php7.4-fpm.log
/var/log/php8.1-fpm.log

Or check the error_log setting in the pool config (/etc/php/8.1/fpm/pool.d/www.conf) for a pool-specific log path.

Tail it:

sudo tail -f /var/log/php8.1-fpm.log

Look for PHP parse errors, call-stack dumps, or FPM pool failures.


5. Verify File & Folder Permissions

Laravel needs write access to storage/ and bootstrap/cache:

cd /home/mailivery/mailivery
sudo chown -R www-data:www-data storage bootstrap/cache
sudo chmod -R 750 storage bootstrap/cache

If PHP-FPM runs as a different user, swap www-data accordingly.
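To verify the result without guessing, a quick sketch that prints owner, group, and mode for both directories (path from the example above; stat -c is the GNU coreutils form):

```shell
cd /home/mailivery/mailivery 2>/dev/null || echo "project root not found"
for d in storage bootstrap/cache; do
    if [ -e "$d" ]; then
        stat -c '%U:%G %a %n' "$d"   # owner:group mode path
    else
        echo "$d: missing"
    fi
done
```

The owner should match the user in your PHP-FPM pool config, and the mode should allow that user to write.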


6. Final Tips & Next Steps

  • Use Laravel’s Log::info() or dd() to pinpoint logic errors.
  • Install Laravel Telescope or Sentry for deeper runtime telemetry.
  • If you suspect opcache or stale files, restart PHP-FPM: sudo service php8.1-fpm restart
  • For systemd-enabled servers, view FPM via journalctl -u php8.1-fpm.

Once you find the first error line in Laravel’s log or PHP-FPM’s stack, you’ll know exactly what to fix. Happy debugging!