Category Archives: troubleshoot

WordPress and Mail-icious Logs


Remember a couple months back when I wrote about my ridiculous error log woes? Well, I must be the Queen of Crappy Logs because all that fun slammed back into my life in the form of mail log overload. So what gives!? I’ve cleaned my code on older CakePHP installations, added a new WordPress installation for a new project, swapped out any standard wp_mail() functionality for Mandrill API fun, and followed my usual routines for new websites. WHY IS MY SERVER ON FIRE!?

Apparently this hosting-space-savvy developer needed a lesson in managing a server on her own. I got to see first-hand what happens when you disregard any anti-spam precautions when firing up a new WordPress site. I’ve known about Akismet and Mollom, but never thought they were super necessary. I rarely see the deep admin side of my hosted servers, so out of sight, out of mind, right? NOT ANYMORE! This is now my server to protect; I can’t just turn a blind eye while the mail log file grows by 1GB every half hour and chews up all the CPU. MUST. SAVE. THE WORLD. THE SERVER!

At first, I didn’t realize this was spam. I naturally thought back to my error fun from weeks ago and approached this instance as another flare-up. I start by referencing my friend cat /dev/null > mail.log to help clear some space. This moment of clarity lasts seconds, as the spam attempts continue to overflow the screen. I figure, “Hey, I have Mandrill hooked up. Why is Postfix even trying to work without an active local mail server?” That thought leads me to stop Postfix.
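For the record, the triage looked roughly like this (the log path is the OS X default and your mail setup may differ, so treat it as a sketch):

ls -lh /var/log/mail.log                         # how big has this thing gotten?
tail -f /var/log/mail.log                        # watch the flood in real time (Ctrl-C to stop)
sudo sh -c 'cat /dev/null > /var/log/mail.log'   # truncate in place to reclaim space
sudo postfix stop                                # stop Postfix (launchd may have other ideas)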

Nothing.

OKAY! I’ll completely unload Postfix so HA! Won’t even be an option for ya now, server. Take THAT. Wait…what’s it doing? It’s using a DIFFERENT mail option!? Ninety-two kill PID attempts later and holy hell I’ll put back Postfix. Cripes.

::spam logs continue and I swear they’re even faster/angrier::

WHY WON'T YOU DIE

 

…ugh okay. Now what!? Why is this thing so angry? I start to wonder if it’s in fact the WordPress installation. Maybe if I turn off the site for a moment? I flip the switch and... silence. Mk, so it’s definitely this WordPress site. This pause gives me a moment to actually read the lines in this out-of-control log stream. That’s when I notice a bunch of phony email addresses using our WordPress site’s domain. I definitely don’t have any addresses set up for that domain, so the spam lightbulb finally goes off. Great! At least I have a clue!
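Once you can actually read the stream, a quick tally makes the pattern obvious. This assumes the stock Postfix log format and path, so adjust as needed:

# count which "from" addresses are flooding the queue
grep -o 'from=<[^>]*>' /var/log/mail.log | sort | uniq -c | sort -rn | head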

Time to bust out the anti-spam guns. The most popular solution is Akismet, so I went there and got a key. Back in my day ::shakes a cane::, Akismet was free to anyone. I can understand that their popularity and success have them asking for some maintenance help now, and anything would be worth a quiet mail log at this point. I snag a key and pop it into place in my WordPress installation. I also decide to add a CAPTCHA plugin for good measure, even though I’ve disabled all commenting. It’s amazing how resilient these spam bots are even when all commenting/pinging functionality is turned off.
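Side note: if WP-CLI happens to be on the box (an assumption; it wasn’t part of my routine), the plugin half of this is scriptable, though the key itself still gets pasted in through wp-admin:

wp plugin install akismet --activate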

Welp! It’s not complete log silence, but I can see a shift in the log messages. Instead of trying to send these spam mails through a dormant mail server, I see Akismet doing its job of double-checking each one and rejecting accordingly. Lovely! I may still be racking up a sizable log file over time, but at least my server CPU is back to normal and free again. Hooray!

Now that I have some time to consider alternative measures, I can’t help but question whether Akismet is the best solution for WordPress spam protection. It’s very robust now and provides a great third-party check, but is that server-to-server jump really necessary for the sake of some phony email? For now, I believe it is. I can see myself shifting my stance once I get my feet wet and become more familiar with the whole spam front. There are many alternatives out there that keep you within your own server; privacy-crazed developers rejoice! Be sure to check them out for yourself and save yourself from this chaotic, spammy mess!

That’s all for now! Happy coding :)

 


A Kick in the Bash


Well, well, well. This “I-need-a-GUI-to-make-sure-I’m-not-hosing-files” girl was forced to dive into Terminal/UNIX land to fix a rather daunting bug. Given the successful outcome, I feel more than obligated to document it and save it for [hopefully unnecessary] later use.

Long story short, I had to compress and properly format thousands of large photos on my server without renaming or moving them. A recent update to a camera system caused photos to upload to the server as TIFFs instead of JPEGs, and this has been going on since November. Awesome! So here I am with thousands of enormous photos that are being used by a live site. Sweet setup, right?

So after pacing about for a while and considering the daunting task of manually fixing all the files (i.e., my personal hell), I ‘hit the books’ and started sniffing out bash scripts and image conversion tools.

Fortunately, my coworker is rather familiar with the fun of bash scripting thus he had immediate tips to get me on the right path.

Step 1: ImageMagick (link to Mac installer)…Whew, does this thing do some work! Soooo many tools and tricks for playing around with image files. The ‘convert’ command is probably the ideal means of altering files, since it leaves the originals intact. Buuuuut we, too, like to live dangerously. Bring in Mogrify! I promise I remained calm while reading that “this tool is similar to ‘convert’ except that the original image file is overwritten.” ::cough:: No going back? GREAT! Sounds like a solid idea for a newbie. Yeesh. Unfortunately, in-place is exactly what I want, haha. Advocacy for heavy pre-testing? Sure, why not.
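To make the difference concrete, the two commands look roughly like this (filenames here are made up for illustration):

# 'convert' reads one file and writes a separate output, so the original survives mistakes
convert mislabeled_photo.jpg test_output.jpg
# 'mogrify' re-encodes the file where it sits -- the original is overwritten
mogrify -format jpg mislabeled_photo.jpg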

Step 2: Get a testing area and play! Get that mogrify up and running and see what happens.

Immediate issue #1: Our image filename convention is simple for dynamic functionality: basically, six images per product, indexed sequentially with the unique product number for quick pulling. This seems to clash with the ‘update-in-place’ effect of mogrify. It did the trick for most files, but it had a tendency to rename the reformatted files to [filename]-0.jpg and create a preview file called [filename]-1.jpg. So it’s clearly indexing, too; not ideal. I definitely needed the image names to stay the same! Meh, time for some extra script lines.

Immediate issue #2: We don’t want to alter ALL the files in this giant directory (100+GB of files), but just the ones that are large ‘n in charge. SO! We need to single out the files that are over 1.3MB and THEN run some script magic. This would essentially single out the mixed up files dating back to November. We have our friend “-size” to do the trick. Perfect!
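Before letting anything destructive loose, a read-only dry run with that same size filter (using the placeholder path from the script below) is a reassuring sanity check:

cd /Path/To/Stuff/You/Wanna/Change
# just count the candidates; nothing gets modified here
find . -size +1300000c -name "*.jpg" | wc -l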

The rest was just cleaning up the indexing mess from mogrify by deleting/chopping the files. Nothing some -exec and -delete commands couldn’t handle. After some tinkering with this guy, we managed to get a final, working script:

#!/bin/sh
cd /Path/To/Stuff/You/Wanna/Change
find . -size +1300000c -name "*.jpg" -exec mogrify -format jpg {} \;
find . -name "*-1.jpg" -delete
find . -name "*-0.jpg" -exec bash -c 'mv "$1" "${1//\-0}"' -- {} \;

Line 1: Get to the main photo directory.
Line 2: Find files over 1.3MB with a .jpg extension and reformat them into true JPEGs (again, these mixed-up files are TIFFs with .jpg extensions; nice ‘n messed up, right?).
Line 3: Find all the extra files that now have the mogrify “-1.jpg” convention and delete them.
Line 4: Find all the reformatted, desired files with the “-0.jpg” convention and simply chop the “-0” from the name. Voila!

We set up a few test runs on the Desktop to play it safe. Turned out nicely! Dropped files from 1.5MB to ~100KB in one swoop while maintaining the original filenames. Awesome.

Time to saddle up and run the real deal, right? Wait!

Step 3: BACK UP EVERYTHING AND ITS MOTHER. Despite my surge of confidence and blatant excitement over this script, I knew things could still hit the fan. A simple download of Carbon Copy Cloner did the trick for us! After a nice ‘n safe back-up, NOW we can pull the trigger. If you are used to bash scripting, you can skip to Step 5. Otherwise…

Step 4: UNDERSTAND how bash scripts execute! This is an important step I missed prior to all of this fun. I really haven’t done anything drastic (or at least multi-lined) with shell scripting, so I missed the fact that each line runs over the entire directory before the script moves on to the next line. I’m used to languages that loop through all the steps for one item before moving to the next index, but that’s not the case here. This tidbit would have saved me a lot of unnecessary anxiety when I watched it run for real!
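If that ordering isn’t obvious (it wasn’t to me), a harmless echo-only version of the same structure shows it: every file gets hit by pass 1 before pass 2 even starts.

# toy illustration only: swap the echoes for real commands and the order holds
find . -name "*.jpg" -exec echo "pass 1: {}" \;
find . -name "*.jpg" -exec echo "pass 2: {}" \;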

Step 5: HIT IT! Well, okay…make sure your ‘cd’ line is correct but then pull the trigger! Stop waiting and worrying; just do it. You have the right script lines, you did the testing, you have the clone…you’re good to go! Do it!

Finally: Sit in fear for the duration unless, of course, you went through Step 4, haha. I nervously watched the system churn its way through thousands of files, skipping some and drastically (and irreversibly) altering others, all in hopes that it would come out smoothly in the end. Keep an eye on what it’s doing and where it’s venturing, as you may have some slight cleanup for directories that fit the mold of those you wanted to fix. Also watch out for any permissions clashes, which could easily hose your alterations. Fortunately, the script covered 95% of the necessary changes and I was able to clean up the last 5% within a few minutes.

So that was it! My first true-blue run with a hefty bash script. My website is happy again with its proper images restored and I can relax knowing that everything is in working order. Back to laid-back PHP for now; I need to decompress!

Happy coding!


Moving Old Sites to Mavericks Server

I recently upgraded a web server from Snow Leopard to Mavericks (subtle upgrade, ha) and naturally anticipated a few headaches. Fortunately, the upgrade went rather smoothly! Mavericks loaded right up; the Server app made everything simple; a quick MySQL download got my databases in place; and the existing websites nestled nicely into their proper locations. All gravy!

::one day later::

Server is crashing! What!? What is going on? Ah, maybe it’s that 90GB Apache error_log file! FUN! So what gives?

Should have figured that moving CakePHP files circa 2009 to a brand new server with the latest PHP would be a BIT of a clash. A ‘bit’ = monumental. As of PHP 5.4, E_STRICT error reporting is bundled into E_ALL. You used to be able to segment those reports off or request them separately, but now you get ALL the strict reports in one swoop. Needless to say, the default settings in all my older CakePHP sites are not up on this change. Holy logging, Batman! Every little move on each website logs at least 10 PHP Strict errors at a time. The log viewer is struggling, the OS is lagging, everything is taking a hit. Fortunately, my little friend “cat /dev/null > error_log” was my savior! Cleared out that bad boy in one hit and bought myself some time to fix the error reporting.

IMPORTANT NOTE: I understand it’s bad practice to turn off/ignore error warnings. I’ve seen plenty of condescending users on Stack Overflow throw that jab. The issue is the lack of time to fix ALL the errors (many rather useless) before the server crashes. So! If you need to buy time or just don’t have the patience to weed out every single warning, then this post will help. I simply believe that silencing some overkill warnings is far better than corrupting your databases or crashing your hard drive. That’s just me. Onwards!

I likely could have updated the php.ini file, but that requires an Apache restart and I wasn’t sure I wanted to hide reporting from ALL sites on the server. I know which sites are the most clunky, so I opted for code-based changes. Here’s the thing: everything on Stack Overflow made it sound like one simple code change would fix everything. What a thought! It got my hopes up until I realized it didn’t work, haha (at least for me).

I kept seeing “all you need to do is put 'error_reporting(E_ALL & ~E_STRICT & ~E_DEPRECATED);' in your website/cake/bootstrap.php!”…sounded lovely and super easy, but I still saw craploads of errors in my logs even after clearing all forms of cache/tmp folders. No dice. Per usual, I assumed I wasn’t doing something right, so I tried mannnnny variations of the reporting settings: remove the ~, use 'E_ALL ^ E_STRICT' instead, change the order, etc. I even moved the error reporting changes from the bootstrap into the core files instead. Nothing. After a while, I decided to just double up my efforts and put the logging changes in the core file AND the bootstrap file. So aside from the bootstrap.php error_reporting line, I went to website/app/config/core.php and swapped the default error logging setting, "Configure::write('log', true);", with "Configure::write('log', E_ALL & ~E_NOTICE & ~E_STRICT);". One last refresh of everything, cleared the log file one more time, and opened the Server app while wincing.

…silence…

FINALLY. The logs ceased, my OS resumed normal speeds, and the Server app wasn’t churning! Success! 24 hours later, and my error_log file is a mere 2MB with 3 of 9 sites updated with the logging changes. Much better. Plenty of time to work on things!

So to conclude, here’s how you buy time with old CakePHP websites and PHP 5.4:

  1. Keep “cat /dev/null > error_log” handy! You can always run it over ssh from a different computer if the server starts to lag too much or crash (see the one-liner after this list). Just remember to be in the Apache log directory first, or use the full path.
  2. In yourcakesite/cake/bootstrap.php, change the existing error_reporting line to "error_reporting(E_ALL & ~E_STRICT & ~E_DEPRECATED);".
  3. In yourcakesite/app/config/core.php, change the default "Configure::write('log', true);" to "Configure::write('log', E_ALL & ~E_NOTICE & ~E_STRICT);".
  4. Clear tmp cache files for kicks; never hurts.
  5. Enjoy a quieter log stream!
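For step 1, the remote version is a one-liner. The user, host, and even the log path are assumptions based on a stock OS X Apache setup, so tweak accordingly:

# run as a user that can write the log, or wrap the remote command in sudo
ssh admin@your-server "cat /dev/null > /var/log/apache2/error_log"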

Not sure if anyone is carrying older sites forward like we are at this point, but hopefully this helps if so! Always something to learn when upgrading servers and sites.

Now for a calm Wednesday! Happy coding!


MySQL a.k.a. MyHEADACHE

Okay, I’m subjecting myself to developer shame and scrutiny, but I’m tired of hitting these little snarls. In other words, this post is more for myself than anyone else. Plus, the other point of this blog is to document my trials and solutions so brace yourself.

I’m simply trying to get a new installation of MySQL up and running on a Mavericks-based server. There are 20,398,430,284 (maybe fewer) articles regarding the process, but I somehow manage to slam into walls. Part of me believes it’s the fact that there ARE too many articles and it’s so easy to get on a wrong track vs. following one installation process. Either way, I think I have the combo that did it for me once and for all.

Credit to these folks:

Originally, I followed one guy’s take on it all and ended up with no mysql database and completely locked out of the root user (with or without password attempts; cute, I know). So! I decided to start over. But nooOOooo, you need to COMPLETELY remove all MySQL fun from your drive before trying again. That uninstallation link above was the key to that; I had missed a few files and especially the logged history notes. Time to put the Joe Schmoe blogs aside and stick to basics (I know, I know: KISS).

I stuck to the walkthrough suggestions, simply downloaded from the MySQL dev site, and followed those steps. To avoid the root battles right away, I set the root password immediately with "mysqladmin -u root password 'PASSWORD'". It SEEMED to work from there, but I wasn’t convinced, thanks to my earlier walls. I ran the mysql_secure_installation script to verify my user settings and to get things locked down and cleaned up from the start. I was able to update the root password again at this point if I liked, which was NICE. This little process (much like CakePHP bake) helps you clean up your testing databases/usernames to prep for production. Given that I’m simply transferring servers, I’m all about getting the production replica in place and nixing the test gear. Sweet success!
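For my own future reference, the happy path boiled down to roughly this (paths assume the standard mysql.com package; the password is a stand-in):

sudo /usr/local/mysql/support-files/mysql.server start                # start the server
/usr/local/mysql/bin/mysqladmin -u root password 'NEW_ROOT_PASSWORD'  # set root's password up front
/usr/local/mysql/bin/mysql_secure_installation                        # lock down test users/databases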

Now I can finally put things back into place. Again, I’m fully aware that this isn’t something ‘hard’; it’s just finicky, and my attention span can be rough. I found solace in the fact that most walkthroughs reference the notorious 2002 socket fix, which reminds me that there are still nuances in this whole MySQL process. I’m sure Oracle will keep battling to make it even tougher in the future, too. Apple always had MySQL in its Server Admin services list until the recent OS releases dropped it. I see a pattern! ::sigh:: Hello, MariaDB!
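And since I brought it up, the usual cure for that 2002 socket error is just a symlink so PHP and MySQL agree on where the socket lives (default locations assumed):

sudo mkdir -p /var/mysql
sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock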

Well that’s all for now. Happy coding (now that you can)!
