Category: electronics

  • SharePoint 2010 Synthetic File Data

    Still trying to work through creating synthetic data for an out-of-the-box SharePoint performance test.  The script below creates the test data, creates a new content database and site collection (so it doesn’t interfere with anything else and is easy to clean up), and uploads all the test data into it.  The biggest downside right now is that the data is created on disk and then uploaded, which means you need enough free disk space to hold all of it.  Not a huge issue for me, but possibly for you.

    The general idea for the upload came from a few places; the file creation was put together locally.

    #USER Defined Variables
    #Specify the extension type of files you want uploaded
    $strDocTypes = @(".docx",".xlsx", ".pptx", ".pdf")
    #The max amount of data generated in MB
    $maxSize = 50
    #The max size one file could be in MB
    $maxFileSize = 10
    #Intermediate folder where the test data is placed
    $fileSource = "F:\TestData"
    #New Content Database (for easy removal)
    $dbName = "Portal_ContentDB2"
    #New Site collection template
    $template = "SPSPORTAL#0"
    #Account owner
    $siteOwner = "TEST\Administrator"
    #Web Application address
    $webApp = "https://portal"
    #Site Collection Address
    $siteCollection = "/sites/1"
    # Do not edit anything beyond this line
    
    #Create all the test data using FSUTIL
    
    $rand = New-Object System.Random
    $fileTotalBytes = 0
    do {
    	$guid = [guid]::NewGuid().ToString()
    	$fileName = $guid + $strDocTypes[$rand.Next(0, $strDocTypes.Length)]
    	#Multiplying three random doubles skews file sizes toward the small end
    	$rand1 = $rand.NextDouble()
    	$rand2 = $rand.NextDouble()
    	$rand3 = $rand.NextDouble()
    	[int]$fileSize = 1048576*$rand1*$rand2*$rand3*$maxFileSize
    	#Create the file in $fileSource so the upload step below can find it
    	FSUTIL FILE CREATENEW (Join-Path $fileSource $fileName) $fileSize
    	$fileTotalBytes = $fileTotalBytes + $fileSize
    	$fileTotal = $fileTotalBytes/1024/1024
    }
    #Data generation keeps going until the amount of data is > $maxSize
    while ($fileTotal -le $maxSize)
    
    #Creation of the new content database and site collection
    $siteCollectionURL = $webApp + $siteCollection
    New-SPContentDatabase $dbName -WebApplication $webApp
    New-SPSite -url $siteCollectionURL -OwnerAlias $siteOwner -Name "Test Doc Library" -Template $template -ContentDatabase $dbName
    
    #Upload all the generated data into the $siteCollectionURL/Documents library
    $spWeb = Get-SPWeb -Identity $siteCollectionURL
    $spFolder = $spWeb.GetFolder("Documents")
    $spFileCollection = $spFolder.Files
    Get-ChildItem $fileSource | ForEach-Object {
    	#The third argument ($true) overwrites the file if it already exists
    	$stream = $_.OpenRead()
    	$spFileCollection.Add("Documents/$($_.Name)",$stream,$true) | Out-Null
    	$stream.Close()
    }
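
    Once you’re done with the test, cleanup is the payoff of isolating everything in its own content database and site collection.  A minimal sketch, assuming the stock SharePoint 2010 cmdlets and the variables defined above:

    #Cleanup sketch - assumes the variables from the script above are still in scope
    Remove-SPSite -Identity $siteCollectionURL -Confirm:$false
    Remove-SPContentDatabase -Identity $dbName -Confirm:$false
    Remove-Item (Join-Path $fileSource "*") -Force
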
  • SharePoint 2010 Load Testing Kit

    Was looking for ways to generate synthetic test data for a SharePoint out-of-the-box install today, and ran into the SharePoint 2010 Load Testing Kit.  While it doesn’t help me in this stage of the project, I could see it being useful later or on other projects.

    There appear to be a lot of dependencies, though:

    • Migration from 2007 to 2010
    • As it collects info from your log files, you’ll need to have everything migrated for the scripts to work
      • Data
      • Apps
      • Site Collections
      • Etc.

    Could be hot though!

  • Migration to WordPress Network Part 3

    I haven’t talked about this in a while, but everything has been running smoothly.  Having only two instances to worry about is definitely better than the 5+ I had before.

    However, today, I wanted to add a subdomain to a domain that is hosted in the WordPress Network. It took a few minutes to remember what I had done (thankfully all the articles I already read helped), but a few minutes later I had a subdomain running.

    Essentially it is the same setup as before:

    1. Create the website in the WordPress Network Admin site (i.e. subdomainA.rebelpeon.com)
    2. Create the subdomain mirror entry in the Dreamhost panel under your main WordPress Network domain (i.e. subdomainA.rebelpeon.com)
    3. Create the subdomain mirror entry in the Dreamhost panel for the site you want (i.e. subdomainA.displaydomain.com)
    4. Add in the domain mapping
    5. Celebrate!
  • Search Schedule Script

    To set up the crawl schedules for the default Local SharePoint Sites content source, you can use the script below:

    $ssaName="Search Service Application"
    $context=[Microsoft.Office.Server.Search.Administration.SearchContext]::GetContext($ssaName)
    
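    # Incremental crawl: daily starting 10/23/2011 at midnight, repeating every 720 minutes (12 hours) over a 1440-minute (24-hour) window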
    $incremental=New-Object Microsoft.Office.Server.Search.Administration.DailySchedule($context)
    $incremental.BeginDay="23"
    $incremental.BeginMonth="10"
    $incremental.BeginYear="2011"
    $incremental.StartHour="0"
    $incremental.StartMinute="00"
    $incremental.DaysInterval="1"
    $incremental.RepeatInterval="720"
    $incremental.RepeatDuration="1440"
    
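    # Full crawl: weekly starting 10/23/2011 at 6:00 AM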
    $fullCrawl=New-Object Microsoft.Office.Server.Search.Administration.WeeklySchedule($context)
    $fullCrawl.BeginDay="23"
    $fullCrawl.BeginMonth="10"
    $fullCrawl.BeginYear="2011"
    $fullCrawl.StartHour="6"
    $fullCrawl.StartMinute="00"
    $fullCrawl.WeeksInterval="1"
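    # Attach both schedules to the default Local SharePoint Sites content source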
    $contentsource = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssaName -Identity "Local SharePoint Sites"
    
    $contentsource.IncrementalCrawlSchedule=$incremental
    $contentsource.FullCrawlSchedule=$fullCrawl
    $contentsource.Update()
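
    To confirm the schedules took effect, you can read them back off the content source (purely an optional check):

    $contentsource | Select-Object Name, IncrementalCrawlSchedule, FullCrawlSchedule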
    
  • SQL Server Issues

    Last week I was beating my head against the table because a VM I had quickly created wasn’t allowing SQL Server to install.  I kept receiving the following error in the detailed SQL error log:

    Configuration action failed for feature SQL_Engine_Core_Inst during timing ConfigRC and scenario ConfigRC.
    External component has thrown an exception.
    The configuration failure category of current exception is ConfigurationFailure
    Configuration action failed for feature SQL_Engine_Core_Inst during timing ConfigRC and scenario ConfigRC.
    System.Runtime.InteropServices.SEHException: External component has thrown an exception.
    at Microsoft.Win32.SafeNativeMethods.CloseHandle(IntPtr handle)
    at System.Runtime.InteropServices.SafeHandle.InternalDispose()
    at System.Runtime.InteropServices.SafeHandle.Dispose(Boolean disposing)
    at System.Diagnostics.Process.Close()
    at System.Diagnostics.Process.Dispose(Boolean disposing)
    at System.ComponentModel.Component.Dispose()
    at Microsoft.SqlServer.Configuration.SqlEngine.SqlServerServiceBase.WaitSqlServerStart(Process processSql)
    at Microsoft.SqlServer.Configuration.SqlEngine.SqlEngineDBStartConfig.ConfigSQLServerSystemDatabases(EffectiveProperties properties, Boolean isConfiguringTemplateDBs, Boolean useInstallInputs)
    at Microsoft.SqlServer.Configuration.SqlEngine.SqlEngineDBStartConfig.DoCommonDBStartConfig(ConfigActionTiming timing)
    at Microsoft.SqlServer.Configuration.SqlConfigBase.SlpConfigAction.ExecuteAction(String actionId)
    at Microsoft.SqlServer.Configuration.SqlConfigBase.SlpConfigAction.Execute(String actionId, TextWriter errorStream)
    Exception: System.Runtime.InteropServices.SEHException.
    Source: System.
    Message: External component has thrown an exception.

    It turns out that I had accidentally downloaded the debug (checked) build of Windows 2008 R2 SP1, and, well, you can’t install SQL Server on that build.  Needless to say, the error message makes this completely obvious.  I found the hint to look at the ISO I was using on MSDN Social.

  • Migration to WordPress Network Part 2

    I had two outstanding items to figure out before migrating my last site.  Today I was able to knock off one.

    My Director installation used to live in a directory under my website.  Unfortunately, with the migration to a WordPress Network, that wouldn’t work.  Everything is done via DNS redirection, so a directory doesn’t physically sit where you think it does.  I can only imagine the nightmares it could cause.

    Instead, I moved it to a sub domain.  This seemed to fix all the issues, and it is actually pretty nice there.  I just had to update a few links on various pages, make a few php.ini updates, and all was well.

    Now it’s on to the massive site.  It turns out that, with all the media attached to each post, the import kept timing out, so I think I have to import only a few records at a time.

    Update: Well, that was a fun experiment.  Since I can’t seem to upload any of my previous entries (Dreamhost kills the script), I’ve decided things are working ok with two WordPress installs, and that’s how it will stay.  The other site is huge anyways, so it makes sense…

  • Net.TCP, IIS7, and Classic AppPools

    With the application that I’m helping “make fast,” one of the optimizations identified by an architect was to use net.pipe or net.tcp on the application tier.  The services on that tier call into each other, and they waste a lot of time doing all the encapsulation that comes with wsHttpBinding.

    We first tried net.pipe because it is incredibly fast and entirely local.  However, because of how they do security here, it didn’t work.  Next up: net.tcp.

    Overall, it wasn’t that difficult to set up the services with dual bindings (wsHttpBinding for calls from other servers and net.tcp for calls originating from the same server).  Granted, there were a lot of changes, and since all the configs are done manually here, the process was very error-prone.  I will be glad not to do any detailed configuration for a while.
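
    For reference, the IIS side of the dual-binding setup can be scripted.  This is a rough sketch only: the site and application names are hypothetical, and it assumes the WCF non-HTTP activation feature and the Net.Tcp Listener Adapter service are installed.

    Import-Module WebAdministration
    #Sketch only - "AppTier" and "Services" are placeholder site/application names
    #Add a net.tcp binding to the site (808 is the default net.tcp port)
    New-ItemProperty "IIS:\Sites\AppTier" -Name bindings -Value @{protocol="net.tcp";bindingInformation="808:*"}
    #Allow the application to activate over both protocols
    Set-ItemProperty "IIS:\Sites\AppTier\Services" -Name enabledProtocols -Value "http,net.tcp"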

    Anyways, we did our testing of the build that incorporated it through 3 different environments, and over the weekend it went into Production.  Of course, that is when the issues started.

    Bright and early Monday morning, users were presented with a nice 500 error after they logged into the application.  On the App tier servers we were getting the following error in the Application event log with every request to the services:

    Log Name:      Application
    Source:        ASP.NET 4.0.30319.0
    Event ID:      1088
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Description:
    Failed to execute request because the App-Domain could not be created. Error: 0x8000ffff Catastrophic failure

    Well, since it mentioned the App Domain, we ran an IISReset and all was well.  I didn’t look too much into it at the time and only did some cursory searching, as it was the first time we had seen it.  However, today it happened again.

    Our app is consistently used between 7AM and about 7PM, but during the night it isn’t used at all.  That is when the appPool is scheduled to be recycled (3AM).  It appeared the recycle was what was killing us, as only the App Domain is recycled and not the complete worker process.

    Immediately we had the guys here remove the nightly recycle and change it to a full IISReset.  At least that way we had a workaround until we could determine the actual root cause, come up with a fix, and test said fix.  However, it didn’t take long to find it…

    One of the interesting things about this issue was that after it started happening in Production, a few of the other environments started exhibiting similar symptoms: Prod-Like and a smaller sandbox environment.  Mind you, neither of these environments had these issues during testing.

    So, I took some time to dive in and actually figure out the problem.  At first I thought it was metabase misconfigurations in these environments; I wouldn’t say any of them were pristine, nor consistent with each other.  I found a few things, but nothing really stood out…until…

    While I was doing diffs against the various applicationHost.config files, Notepad++ told me the file had been edited and needed to be refreshed (that was them removing the appPool recycles).  As soon as that happened, the 500 errors started.  It didn’t help that they did both machines at the same time, which took down the whole application, but that’s another story.

    This led me to believe that it wasn’t something within the configuration, but it also showed me how to reproduce it, at least part way.  The part that I was missing was that I had to first hit the website to invoke the services and then change the configs causing an app domain recycle.

    Then I attempted to connect to the worker process with WinDbg to see what it was doing.  That was a complete failure, as nothing was actually happening in the process.  No exceptions being thrown, no stuck threads, etc.  It appeared to just sit there.

    A bit of searching later led me to an article describing the exact same issues we were having, where changing from a Classic to an Integrated appPool fixed it.  However, it didn’t mention why.  Of course I tried it and it worked.  To appease the customer’s inquiries, though, I knew I needed to find out why.

    I still don’t have a great solution, but apparently net.tcp and WAS activation has to be done in Integrated mode.  If it isn’t, you get the 500 error.  But ours works fine until the app domain is recycled.  Well, according to SantoshOnline, “if you are using netTcpBinding along with wsHttpBinding on IIS7 with application pool running in Classic Mode, you may notice the ‘Server Application Unavailable’ errors. This happens if the first request to the application pool is served for a request coming over netTcpBinding. However if the first request for the application pool comes for an http resource served by .net framework, you will not notice this issue.”

    That would’ve been nice to know from Microsoft’s article on it, or at least a few more details.  I remember reading an article about the differences between Integrated and Classic, but I sure don’t remember anything specific to this.
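
    For what it’s worth, the appPool fix can be scripted too.  A minimal sketch using the WebAdministration module (the pool name is hypothetical):

    Import-Module WebAdministration
    #"AppTierPool" is a placeholder pool name
    #Switch the pool from Classic to Integrated pipeline mode
    Set-ItemProperty "IIS:\AppPools\AppTierPool" -Name managedPipelineMode -Value Integrated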

    Anyways, hope this helps someone who runs into the same issue…

  • Windows 2008 Performance Alerts

    This may seem silly to some of you, but I am still getting used to Windows 2008.  Sadly, I don’t spend as much time actually administering servers as I used to (silly management), so it usually takes me a bit longer to make my way around 2008 than 2003.  I like to think they made everything more complex, but for some reason I’m sure I’ll get booed about that.

    Anyways, this morning I was attempting to set up some performance alerts on some servers we’re having issues with.  Basically, I wanted them to email us when a certain threshold was reached.  No big deal; thinking I had this, I created the email app, created a performance counter alert, and then manually added the email app in.

    Needless to say, that didn’t work.  It took me a while to figure out why, too, as my little email utility worked fine.  So I began a new search to find out how stupid I was being.

    Turns out, quite a lot of stupid.  Instead of using the utility, you can now use scheduled tasks…which include an email action!  I basically used the instructions over at Sam Martin’s blog, which, I may add, he posted in April of this year.  I’m not the only n00b.  Plus, who doesn’t have an enterprise system that deals with this sort of stuff already (at least at the types of clients I work with)?

    Perfmon

    1. Open up perfmon
    2. Create a new User Defined Data Collector Set
    3. Choose to create manually after naming
    4. Select Performance Counter Alert
    5. Add in the performance counter you care about (mine was requests executing in asp.net apps 4.0)
    6. Choose the user to run it as
    7. Edit the Data Collector in the Data collector set
    8. Change the sample interval to whatever works for you (I set mine to 60s so we can get on top of issues before the users do)
    9. Under Alert Task, give it a name (e.g. EmailAlert) and give it a task argument (you can combine them to form a sentence like “the value at {date} was {value}”)
    10. Start the Data Collector Set

    Scheduled Tasks

    1. Open up scheduled tasks
    2. Create a task, not a basic task
    3. Name it the exact same name you did in step 9 above (i.e. EmailAlert)
    4. Check “Run whether user is logged on or not” so that it runs all the time
    5. Create a new email action under the Action tab
    6. Enter all the info for from, to, subject, etc.  To send to multiple people, comma-separate the addresses.
    7. For the body, type whatever you want; $(Arg0) will pass in the task argument you created in step 9 above.
    8. Enter the SMTP server.

    Done!
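
    If you’d rather script the perfmon half than click through it, logman can create the same counter alert.  A rough sketch from an elevated PowerShell prompt; the counter path and threshold are just examples standing in for whatever you picked in steps 5, 8, and 9:

    #Example only - counter alert sampled every 60 seconds; when the threshold trips,
    #it runs the "EmailAlert" scheduled task and passes the sentence through as $(Arg0)
    logman create alert EmailAlert -th "\ASP.NET Apps v4.0.30319(__Total__)\Requests Executing>20" -si 60 -tn EmailAlert -targ "the value at {date} was {value}"
    logman start EmailAlert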

    Since the performance counter was set to an application pool, whenever that pool disappears (IISReset, idle timeout, etc.) the counter stops.

    Currently Reading (could take a while): A Game of Thrones: A Song of Ice and Fire: Book One

  • Dummy Files

    We are doing some document uploading to SharePoint and needed some test files of various sizes.  FSUTIL ships with Windows, so you already have the tool required to make these files.  Just make sure you run the prompt as administrator, and use the following command.

    FSUTIL FILE CREATENEW 100MBTest.mdb 104857600
    
    Usage: FSUTIL FILE CREATENEW [Filename] [Size in bytes]
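
    (104857600 bytes = 100 × 1024 × 1024, i.e. exactly 100 MB.)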

  • Upgrade K2 Workflow Instances to a Specific Workflow Version

    We were having a specific issue in our Dev and QA environments where K2 was consuming over 16GB of disk space, and subsequently causing our server to run out of disk space.  We had an interim workaround of restarting the K2 service, but within a day of testing, it was possible that K2 would eat it all up again.

    This was happening because workflow instances (cases, in our example) are tied to specific versions of the workflow.  Similar to .NET websites, using a specific version requires a lot of pre-compiling.  Now, I’m not sure why it was using so much disk space per version, but that is essentially what was causing all our issues.

    There are a few things that could’ve made this better:

    1. Testers and Developers not using old cases which are tied to older versions
    2. Building our K2 workflows in Release instead of Debug

    Turns out option #2 reduces the space an individual version uses by orders of magnitude.  Sadly, there is no way to retrofit the processes that are actually already in K2.

    The actual solution is to use some of the newer APIs, specifically the Live Instance Management APIs (oh, and those took a while to find via searching).  The downside is that these APIs were added in 4.5, so anyone on a version prior to that is screwed.  Thankfully we were on 4.5.1!

    Anyways, if you’re lazy, there is an already-built utility on K2 Underground, and you can find some additional info about it there too.

    Just be prepared for it not to work all that well.  We received a ton of Null Reference errors while running the utility against our large database.  It seemed to work fine in our POC, but not against the real thing.  Some cases were changed, but not all, and we still had the same issue.

    In the end, we had to manually go and delete the old cases in K2, which is definitely not supported.  However, our app handles it gracefully, so it wasn’t a huge deal.