Thursday, 25 October 2007

Sharepoint 2007 (MOSS) – List “My” Sites

So you want to have a web part which lists the sites which "I" have access to… easier said than done.

Below is my code for doing this… except it doesn't work. Well, it works for me because I'm an administrator, but not for other users. It iterates through all the site collections in the web application and calls GetSubwebsForCurrentUser on each. Which is fine for me, because I have permission to read the properties of all site collections – a normal user, of course, does not!

I tried using elevated authority and all sorts of tricks, but so far haven't been able to finagle it. I do have one idea… drop all the info I need into a string array using elevated authority, and then only call GetSubwebsForCurrentUser on the site collections where no errors are raised for the authenticated user. I haven't had a spare second to try that out yet, but I will…
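In the meantime, here's a rough sketch of that idea – completely untested, so treat it as a starting point rather than working code. It assumes a using for System.Collections.Generic on top of the usual SharePoint namespaces, and that opening RootWeb throws UnauthorizedAccessException for an unauthorised user:

// UNTESTED sketch: collect site collection URLs under elevated authority,
// then re-open each one as the current user and skip any that throw.
private List<string> GetReadableSiteUrls()
{
    List<string> allUrls = new List<string>();

    // Elevated pass: a normal user can't enumerate oWebApp.Sites directly.
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        using (SPSite elevatedSite = new SPSite(SPContext.Current.Site.ID))
        {
            foreach (SPSite site in elevatedSite.WebApplication.Sites)
            {
                allUrls.Add(site.Url);
                site.Dispose();
            }
        }
    });

    // Second pass as the real user: keep only the sites they can open.
    List<string> readableUrls = new List<string>();
    foreach (string url in allUrls)
    {
        try
        {
            using (SPSite site = new SPSite(url))
            {
                string title = site.RootWeb.Title; // forces a security check
                readableUrls.Add(url);
            }
        }
        catch (UnauthorizedAccessException)
        {
            // No access to this site collection for the current user - skip it.
        }
    }
    return readableUrls;
}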

So for the benefit of others – and hopefully someone else will have a brainwave in the meantime – here is the errant code in question…

using System;
using System.Text;
using System.Runtime.InteropServices;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Serialization;
using System.Web;

using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;
using Microsoft.SharePoint.WebPartPages;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.Administration;
using Microsoft.SharePoint.Publishing;

namespace CSR.Sharepoint.WebParts
{
    [Guid("0d546513-7447-4211-8fc9-75117af712b0")]
    public class MyCollaborationSites : System.Web.UI.WebControls.WebParts.WebPart
    {
        public MyCollaborationSites()
        {
            this.ExportMode = WebPartExportMode.All;
        }

        protected override void Render(HtmlTextWriter writer)
        {
            SPWebApplication oWebApp = SPContext.Current.Site.WebApplication;
            SPSiteCollection oSites = oWebApp.Sites;

            foreach (SPSite site in oSites)
            {
                try
                {
                    SPWeb oWeb = site.RootWeb;
                    int ilevel = 1;
                    if (!PublishingSite.IsPublishingSite(site)) // don't include portal
                    {
                        GetSubSites(ilevel, oWeb, writer);
                    }
                }
                finally
                {
                    // SPSite objects enumerated from an SPSiteCollection should be disposed
                    site.Dispose();
                }
            }
        }

        private void GetSubSites(int ilevel, SPWeb oWeb, HtmlTextWriter writer)
        {
            bool alternate = false;
            foreach (SPWeb oSPWeb in oWeb.GetSubwebsForCurrentUser())
            {
                if (ilevel != 1)
                {
                    // alternate the row shading between grey and white
                    string cssClass = alternate ? "csr-RowGrey" : "csr-Rowwhite";
                    writer.Write("<tr><td class='" + cssClass + "'><a href='" +
                        SPEncode.HtmlEncode(oSPWeb.Url) + "' target='" +
                        SPEncode.HtmlEncode(oSPWeb.Title.Replace(" ", "")) + "'>" +
                        SPEncode.HtmlEncode(oSPWeb.Title) + "</a></td></tr>");
                    alternate = !alternate;
                }

                if (oSPWeb.GetSubwebsForCurrentUser().Count > 0)
                {
                    GetSubSites(ilevel + 1, oSPWeb, writer);
                }
            }
        }
    }
}


Thursday, 11 October 2007

Sharepoint 2007 (MOSS) - Unpublishing/Expiring a Site

It's not immediately obvious how to expire a site… particularly for those used to SPS 2003. In 2003, you could just set the end date to some time in the past or, if you really wanted to, the start date to some time in the far-off future. Unfortunately, with MOSS it's a little more complicated.


You'd think that the "Unpublish" option on the Workflow menu of the Page Editing Toolbar would do the trick but sadly, no! What it actually does is roll the page back to its previous version. Definitely handy, but what if you want the page (which has several versions) to be entirely invisible to users? In theory, you could delete it… but I ask you, how many of your content managers will want to delete a page just because it's expired? None, I'd wager!

So what's the answer? Page Scheduling (also referred to as Publication Dates). You set a start and end date for publication, and the page is only visible during that period. The period may be anywhere from "Immediately" to "Never", so you don't have to think about it too much. Easy, huh? Yeah, well….

Every site CAN have publication dates; however, not every site DOES. The feature has to be enabled. To enable Page Scheduling, do the following:


  • Go to the site in question
  • Go to Site Actions > Site Settings > Modify Pages Library Settings

    • From Versioning Settings, ensure Versioning and Content Approval are both enabled
    • From Manage Item Scheduling, ensure Enable Scheduling is ticked

Unfortunately, no, this does not cascade to subsites of the current site. For the administrators who want to know why: because what you are effectively doing is amending the properties of the Pages list in the site content for that one site.


So still you ask, how do I expire the page? OK…

  1. Show the Page Editing toolbar
  2. In the status section of the toolbar, it will have the version, the status and the Publication date. Click on the date (it's a hyperlink)
  3. The page scheduling options will open
  4. Set the end date to the desired date/time
  5. Hit OK and from the Page Editing Toolbar hit Publish

Again, seemingly simple… unless you're changing pages migrated from SPS 2003. Mostly I've found the start and end dates on those are blank. You can't set the start date or the end date to some time in the past – you will get an error message telling you so. It's a slight irritation, sure, but is it really necessary? What's the big deal already… just let me set the date to a point in the past! Instead, you can set the start date to 'Immediately' and the end date to 5 minutes from now; once you hit Publish and those 5 minutes have passed, the page will magically no longer be visible.
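If you have a pile of migrated pages to expire, doing this one page at a time through the UI gets old fast. Below is an untested sketch of doing it through the object model instead. The internal field names PublishingStartDate and PublishingExpirationDate are my assumption of where the scheduling dates live, and the URL is a placeholder – verify both in your environment before relying on this:

// UNTESTED sketch: expire a publishing page programmatically.
// Assumes scheduling is enabled on the Pages library (see the steps above).
using (SPSite site = new SPSite("http://server/sites/somesite")) // placeholder URL
using (SPWeb web = site.OpenWeb())
{
    SPList pages = web.Lists["Pages"];
    SPListItem item = pages.Items[0]; // the page you want to expire

    item.File.CheckOut();
    item["PublishingStartDate"] = DateTime.Now;                    // 'Immediately'
    item["PublishingExpirationDate"] = DateTime.Now.AddMinutes(5); // 5 minutes from now
    item.Update();
    item.File.CheckIn("Set expiry date", SPCheckinType.MajorCheckIn);

    // With content approval on, the page must also be approved before
    // the new schedule takes effect.
    item.File.Approve("Approved with new expiry date");
}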



Wednesday, 10 October 2007

SQL CLR .Net Function - Split

Having needed a split function many times in the past, I've always utilised patindex and done it the hard way…. And when it comes to nvarchar values, forget about it! So today I took a few minutes to write a SQL CLR function to do it.

The function takes a string and a delimiter and returns a table of string values.

Here are the instructions for those who may want to do the same thing…

  1. Create a Visual Studio project of the type Database Projects \ SQL Server\ SQL-CLR \ C# SQL Server Project – I called it udfclrSplitString but you know, call it whatever you like!
  2. Add the following code:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;
    using System.Collections;

    public class CSRSplitString
    {
        // Table-valued function: splits the input on the first character
        // of the delimiter and returns one row per element.
        [SqlFunction(FillRowMethodName = "FillRow")]
        public static IEnumerable udfclrSplitString(SqlString InputString, SqlString Delimiter)
        {
            return InputString.Value.Split(Delimiter.Value.ToCharArray(0, 1));
        }

        // Called once per element to populate the output row.
        public static void FillRow(object row, out string InputString)
        {
            InputString = ((string)row).Trim();
        }
    }

  3. Build that sucker
  4. Drop the dll onto the SQL server somewhere relevant
  5. If CLR is not already enabled on the server, execute the TSQL code:

    sp_configure 'clr enabled', 1
    GO
    RECONFIGURE
    GO

  6. In the desired database, execute the following TSQL code:

    CREATE ASSEMBLY CSRSplitString
    FROM 'D:\<YourDLLName>.dll'
    WITH PERMISSION_SET = SAFE

  7. In the desired database, execute the following TSQL code:

    CREATE FUNCTION udfclrSplitString
    (@String nvarchar(4000), @Delimiter nvarchar(1))
    RETURNS TABLE (ArrayItem nvarchar(4000))
    AS EXTERNAL NAME CSRSplitString.CSRSplitString.udfclrSplitString;

    Where the external name takes the form MyDotNetAssembly.MyClassName.MethodName

  8. To test it, try:

    select *
    from udfclrSplitString('Kristen,Patrick,Julian,Werner,Koe', ',')

    You should get 5 rows back …

  9. If you want to test it using a table column as the input try something like:

    select a.name as DbNameOriginal, s.*
    from msdb.sys.databases a
    cross apply udfclrSplitString(a.name, '_') s
    where a.name like '%[_]%' -- [_] escapes the underscore wildcard in LIKE

    You will need to have created the function in the database you run this query from – the msdb.sys.databases reference is fully qualified, but the function call is not.

  10. To make this function automatically available in all new databases created on that server, execute steps 6 + 7 in the model system database.

Tuesday, 9 October 2007

Sharepoint MOSS 2007 - MySite Default Locale

The default locale setting in the MySite template is English-US. To amend this, the simplest approach is:

  1. Navigate to http://<server>/MySite/_layouts/regionalsetng.aspx *
  2. Update the Locale as desired and hit OK

    * this assumes your MySite host is at /MySite so adjust as necessary

As for sites which have already been created, this won't fix them. The bottom line is that there is NO supported global fix – according to Microsoft, it is by design.

As a bodgy workaround, you could update every site in the Webs table of your _Content database (limiting it to where FullUrl like '%personal%'). Simply do an update on the Locale column, setting it to the ID of the locale you want. Clearly this is NOT a recommended workaround and may well break any support agreements you have with Microsoft.
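A less scary alternative – equally untested, but it at least stays inside the object model rather than poking the database – would be to walk the personal site collections and set the locale through SPWeb (I'm assuming SPWeb.Locale is settable here). The MySite host URL and the LCID are placeholders (3081 is English-Australia), and you'll need a using for System.Globalization:

// UNTESTED sketch: reset the locale of existing MySites via the object model.
using (SPSite host = new SPSite("http://server/MySite")) // placeholder URL
{
    foreach (SPSite personalSite in host.WebApplication.Sites)
    {
        try
        {
            if (personalSite.Url.Contains("/personal/"))
            {
                foreach (SPWeb web in personalSite.AllWebs)
                {
                    web.Locale = new CultureInfo(3081); // placeholder LCID
                    web.Update();
                    web.Dispose();
                }
            }
        }
        finally
        {
            personalSite.Dispose(); // sites enumerated from .Sites must be disposed
        }
    }
}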

Monday, 8 October 2007

Sharepoint MOSS 2007 - Working with the BDC (Business Data Catalog)

Having just started working with the BDC, here's what I've encountered so far.



Using BDCMetaMan


On advice from others I started using this tool however I found it to be a tad unwieldy in that I couldn't work out how to do everything I wanted – for example, use a Stored Proc as my source.

So I started with BDC MetaMan, took the output it gave me, and then just screwed around with it to make it work with stored procs. In doing so, I found that writing application definitions is nowhere near as complicated as others had led me to believe. That said, it does make for an easy starting point where I could just replace values and avoid the tedium of building up all the XML tags in the right order and syntax and so on…

Application Definition

The basic structure is this:
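Roughly speaking – the skeleton below is my own abridged reconstruction (xmlns and most attributes elided), so verify it against the BDC metadata schema before using it:

    <LobSystem Type="Database" Version="1.0.0.0" Name="MyLobSystem">
      <Properties>…LOB-level properties…</Properties>
      <LobSystemInstances>
        <LobSystemInstance Name="MyInstance">
          <Properties>…connection settings – see the SQL authentication example below…</Properties>
        </LobSystemInstance>
      </LobSystemInstances>
      <Entities>
        <Entity Name="MyEntity">
          <Identifiers>…the primary key column(s)…</Identifiers>
          <Methods>
            <Method Name="GetMyEntities">
              <Properties>…the SQL statement or stored proc to execute…</Properties>
              <Parameters>…in/out parameters with their type descriptors…</Parameters>
              <MethodInstances>…Finder / SpecificFinder definitions…</MethodInstances>
            </Method>
          </Methods>
        </Entity>
      </Entities>
    </LobSystem>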

How to use SQL Authentication

Thanks to Nick Kellet for doing the hard slog and solving this problem….
http://planetmoss.blogspot.com/2006/11/using-single-signon-with-database.html



The basic steps are:

  1. Enable the Microsoft Single Sign-on service using a domain account (in Windows Services)

  2. Create an SSO Application
    Central Admin > Operations > Security Configuration > Manage Single Sign-on > Manage Server Settings
    Enter Display Name, Application Name (note: this field is what you will enter into your application definition in the LOB instance properties) and Email Address. They don't have to be anything in particular. Ensure the Account Type is Group.


  3. Associate the SSO Application with the domain users group or some other desired group
    Central Admin > Operations > Security Configuration > Manage Single Sign-on > Manage Account Information. Select the SSO application and type in the "<domain>\domain users" group. Then key in the SQL username and password. Give the application SELECT and EXECUTE rights (assuming you are using Stored Procedures).


  4. Configure your connection string in the application definition
    <Properties>
    <Property Name="AuthenticationMode" Type="System.String">RdbCredentials</Property>
    <Property Name="DatabaseAccessProvider" Type="System.String">SqlServer</Property>
    <Property Name="RdbConnection Data Source" Type="System.String">SERVER NAME</Property>
    <Property Name="RdbConnection Initial Catalog" Type="System.String">DATABASE NAME</Property>
    <Property Name="RdbConnection Integrated Security" Type="System.String">false</Property>
    <Property Name="RdbConnection Pooling" Type="System.String">true</Property>
    <Property Name="SsoApplicationId" Type="System.String">SSO APPLICATION NAME</Property>
    <Property Name="SsoProviderImplementation" Type="System.String">Microsoft.SharePoint.Portal.SingleSignon.SpsSsoProvider, Microsoft.SharePoint.Portal.SingleSignon, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</Property>
    </Properties>



  5. Import the application definition
    Central Admin > SSP > Business Data Catalog > Import Application Definition


  6. Configure permissions for the application definition
    Central Admin > SSP > Business Data Catalog > Permissions > Copy all to descendants (or apply the entity you want to set if not inheritable)


  7. Add a BDC List web part to a page, configure it and fingers crossed, it will work!

One BIG caveat… getting this to work across multiple domains is going to be a problem. For now, I'm ignoring that issue and hoping it doesn't rear its VERY ugly head…


How to use Windows Authentication

Haven't tried it, so I don't know. But… if you are having problems, my first suggestion would be to consider the Kerberos double-hop issue (in short: the web server can't forward your Windows credentials on to SQL Server unless delegation is configured). If you don't know what that is, google it… it's a reasonably simple fix, if you are friendly with your AD administrator(s)!


Detailed Application Definition

It’s fairly self-explanatory. The bits in green are the bits you have to fill in.

Notes:

  • The available data types are your standard .Net data types.
  • Refer to http://msdn2.microsoft.com/en-us/library/ms546541.aspx for a BDC overview.

Below is my first pass attempt so caveat emptor… it ain't perfect folks! Feel free to add your comments with anything you can see that can be done better or anything that I've got wrong!


What’s Next - Got any ideas?

So next, I’ve been charged to come up with a BDC-based list where some manual info is also stored – bodgy explanation but allow me to demonstrate.


The BDC list contains 20 columns. To this list, we want to add some manually entered data. The list will contain around 2000 records. The BDC data will be current, and the data editor will update the manual fields when time permits. The web part should output something like:

RequestNo | Title | DueDate | Contact | Type | Owner | OutageRequired | OutageNotification

Where OutageRequired and OutageNotification are sourced from manual data and the rest are from the BDC.

So my initial thought was to have a BDC lookup in my manual list where the RequestNo, OutageRequired and OutageNotification are entered. I really hate this idea. It's ugly and impractical. It means that the user has to know the request number upfront, rather than being able to immediately see which request numbers haven't been manually updated because the 2 fields are empty. I haven't had a second to think it through properly yet, but if anyone has a better idea, please speak up!
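One shape a solution could take – and this is a rough, untested sketch rather than a worked design – is a custom web part that does the join itself: read the manual columns out of an ordinary SharePoint list keyed on RequestNo, then merge them into the BDC rows as they're rendered. Every list and column name below is hypothetical, and fetching the BDC rows (assumed here to arrive as a DataTable) is left out, as are the other BDC columns:

// UNTESTED sketch: merge BDC rows with manually-maintained columns.
// The "Manual Data" list and all column names are hypothetical.
private void RenderMergedRows(HtmlTextWriter writer, DataTable bdcRows, SPWeb web)
{
    // Index the manual list by RequestNo for quick lookups.
    Dictionary<string, SPListItem> manual = new Dictionary<string, SPListItem>();
    foreach (SPListItem item in web.Lists["Manual Data"].Items)
    {
        manual[Convert.ToString(item["RequestNo"])] = item;
    }

    foreach (DataRow row in bdcRows.Rows)
    {
        string requestNo = Convert.ToString(row["RequestNo"]);
        string outageRequired = "";
        string outageNotification = "";

        SPListItem match;
        if (manual.TryGetValue(requestNo, out match))
        {
            outageRequired = Convert.ToString(match["OutageRequired"]);
            outageNotification = Convert.ToString(match["OutageNotification"]);
        }

        // Empty manual columns make the not-yet-updated requests easy to spot.
        writer.Write("<tr><td>" + SPEncode.HtmlEncode(requestNo) + "</td><td>" +
            SPEncode.HtmlEncode(outageRequired) + "</td><td>" +
            SPEncode.HtmlEncode(outageNotification) + "</td></tr>");
    }
}

The nice side-effect is that the data editor works in a plain list and doesn't need to know request numbers upfront – anything not yet updated shows up with blank outage columns.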

Wednesday, 3 October 2007

The SQL Troubleshooting Checklist

I read this handy article from SQL Server magazine and thought I'd share it with everyone... a good one to stick on the pinboard!

The Troubleshooting List

By Andy Warren, 2007/10/03

Most DBAs grow into their jobs and part of that process is the baptism by fire that occurs when things go wrong, especially on the performance side of things. I started to make this performance-centric, but so many of the steps and thoughts are common to other problems that I tried to make it a little more generic. In this article I've listed some tips and thoughts about what to do - and why - when the sudden performance dip strikes or another serious problem occurs. It's probably not a perfect list, but the key is to build a process that keeps you on track.

#1 - It's Usually Blocking
When I get a call that 'everyone is locked up', 'everyone is slow', etc, the first thing I look for is blocking. Various ways to check, but the tried and true technique is to execute 'select * from master..sysprocesses where blocked > 0'. Works on SQL 7 forward and takes about 15 seconds to type in and run, then you can decide what action to take. This usually occurs because someone is doing something bad like bulk loading records during the day. You can find and kill the blocking spid, but it's worth looking to see what the spid is doing first. If it's been running for three hours you might be better off to let it finish rather than wait for it to roll back.

#2 - It's Not Blocking, Is It Limited to One Application/Database/Physical Location?
I'm usually asking this question while I'm headed to a machine to work on step #1. If it's NONE of those things I can probably rule out recent application deployments, recent proc changes, network problems to a WAN site. If it is one physical location, I immediately get an email out to the network team to have them check all their hardware. Sometimes a router will blip, or maybe they just rebooted it and forgot to tell everyone. In practice it's usually not the network, but sometimes it is! If it's just one application and/or database, I'm immediately thinking 'what has changed'? If you know how to do wait state analysis this is a good technique to think about applying here but it takes time and you can often shortcut with the right questions.

#3 - If it's One Application, Can They Reproduce the Issue?
This is key. If they can show you it's always slow when they add a customer, look up an order, etc, you can quickly zoom in by profiling and filtering to a single host name. That will let you see exactly what is being executed and look for something with a long duration - long being subjective, but anything close to 30 seconds is a good candidate for an immediate look, and anything down to 5 seconds might need to be checked. Ideally here you can refer back to a baseline so you know what those procedures usually cost. Usually application problems can be traced back to either a change (app code or database), or SQL has generated a new plan for a query that isn't performing well, or both. In the worst case get a developer to step through the code against the production database because it's not always a SQL problem. Check whatever error logging system the application uses and the event log of the machine they are using it on.

#4 - It's One Application, But They Cannot Reproduce on Demand
These can be miserable to solve. I usually start with running separate profiler sessions on a couple users I can trust to call or email when the problem happens and to be able to tell me a lot about what they were doing at the time. I've seen some of these solved by the low tech technique of using a camcorder to capture everything that happened, more recently I've used Morae to let me go back in time to see what happened. Both of those are more often needed for code issues rather than SQL issues, but sometimes you're just looking for clues. If you can get the users to log when it happens you can try to match up in Profiler, the hard part is the query plans may be fine but they are being blocked or bogged down by something happening elsewhere on the server. So you have to have a deep focus on the user AND be looking at the server holistically.

#5 - Sometimes it's the data
This might happen because a lot of data has been added and the statistics don't match the current state of the data, or just because in some cases there is a lot more data than the developer expected. I dealt with a problem like this once where IT users had not flipped a bit in a table that allowed users to cache the results of an expensive query on the client, as a result they kept running that query over and over and over, killing performance. Flipped the bit and magically all was well!

#6 - Rarely Is it the Hardware
Perhaps most common in this category is a drive in a RAID set failing and no one noticing, resulting in a performance decrease because parity is being used to restore the missing data on the fly. I've also seen more than once incidents where 'the server is down' but someone doing work in the server room unplugged/knocked loose the wrong power/network cable. For the most part the hardware will either work or it won't. Doesn't hurt to have some check, but unless the server is actually unavailable this should be further down the list.

#7 - It's Usually Because Someone Changed Something!
I bet we all have had a few of these happen. Having a good change control process that is logged can really help when you're under stress. Something as minor as deploying a proc change where they added a column or order by can have a huge performance impact if the query is no longer covered. Sometimes the rules change after a hotfix (less often) or a service pack (more often) and while hopefully the changes are good, sometimes they aren't. Larger companies may be applying patches on a regular basis and after a while you start to ignore them, until something goes wrong. I'll also add a plug here for rebooting one extra time after applying a service pack because I want to be absolutely sure there are no pending changes waiting to be made live on the next reboot which might be months away - who would think of the last service pack as being an issue when it was applied three months ago?

#8 - It's Incredibly Important to Communicate
As soon as you know an incident is going to last more than a few minutes someone needs to get the word to operations. Work may need to be redirected to other sites, hourly employees put on break or sent for an hour of required but not all that important training, or in worse cases sent home for the day. Even smaller companies may have hundreds of employees/customers affected and the cost per hour of down time can be significant. Tell operations what you know via your boss, let them decide. It's also important to immediately communicate problems to everyone in IT. A lot of times you'll get an immediate reply back from someone who was bulk loading data, rebooting a switch, etc, that will immediately explain the problem. You also will need people to help you chase down leads so that you can stay focused, and someone to make sure you get something to eat and drink while you work.

#9 - Know when to ask for Help
I think the more experienced you are the easier this is to do. If you're a brand new DBA you hate to get the credit card from your boss to call PSS, or ask for a consultant to come, because you don't want to look inept. Absolutely there is a risk to that, but on the other hand, anything beyond a few hours and the cost of either of those pales in comparison. Try setting the stage before the incident by having one call to PSS pre-approved and consider establishing a relationship with a SQL consultant just in case. And it doesn't have to just be PSS, lots of good challenging problems get posted in the forums here and on MSDN every day and the people that answer them like a good challenge and the learning it brings. Lean on your peers too, whether within the company or someone you met at the local user group. With experience you get more comfortable with the idea that you know what you need to know, you know what you don't know, and that it is not a sign of weakness to ask for help. Sometimes easier to say than to do!

#10 - Learn Your Lesson
I bet easily 8 out of 10 times when I'm finally done resolving a performance problem that I can look back and see that I didn't ask the right question, didn't really listen to an answer, took off following the first lead rather than being methodical, or any of a bunch of other things I should have done - in hindsight. This isn't about beating yourself up or holding yourself to some incredibly high standard, but rather stepping past our pride enough to really learn a lesson.

#Extra Credit - The Challenge
It's been a while since one was posted, but Steve and I had a lot of fun writing about many of the near disasters we've been involved in, often of our making. So my challenge to you is post your own performance issue checklist, or tell us about the problem you should have solved sooner and why. Yeah, we may smile a bit when we read them, but I bet more of us smile because we've done the same thing, or because we know that's one trap we won't fall in to. As always I hope you've learned something or sparked an idea or two, and look forward to an interesting discussion!

Thursday, 20 September 2007

Sharepoint 2007 (MOSS) Memory Errors – EventID 6482/6398/7076 - Attempted to read or write protected memory


If you are seeing the following error in your event log: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt"….

…Microsoft do have a Hotfix for this problem – however you will have to contact Microsoft Premier Support to get a copy. Unfortunately if you do not have Support, I can't give you a copy of our hotfix as it is password protected and the password is only valid for 7 days!

The patch in question is a .Net 2.0 hotfix titled 303311_intl_x64_zip.exe for 64bit servers and 303312_intl_i386_zip.exe for 32bit.

Wednesday, 12 September 2007

BSM 2005 & MOSS x64 don’t play well together….

Here's what you have….

Machine1: Windows 2003 x64 with MOSS 2007 x64
Machine2: Windows 2003 x32 with BSM 2005

Here's where you want to end up:

Machine1: Windows 2003 x64 with MOSS 2007 x64 and BSM 2005 WebPart (only)
Machine2: Windows 2003 x32 with BSM 2005

The article How to switch between the 32-bit versions of ASP.NET 1.1 and the 64-bit version of ASP.NET 2.0 on a 64-bit version of Windows implies that BSM 2005 can't be used in an x64 MOSS environment, because the BSM web parts require ASP.NET 1.1 (which is 32-bit only) and IIS can't run BOTH 32-bit and 64-bit websites at the same time.

This shows up when attempting to install the BSM web parts on the MOSS server:

  1. On the MOSS server, run the server.msi from the BSM compatibility pack
  2. Select a custom installation > Web Part components only
  3. It fails because .NetFramework 1.1 is not installed
  4. Install .NetFramework 1.1 on the MOSS Server
  5. Execute cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1
  6. Ensure the .Net 1.1 Web Service Extension is set to Allowed in IIS

Outcomes:

  • MOSS site doesn't work - error is "Service Unavailable"
  • Compatibility pack (web parts only) still fails because WSS 2.0 (or SPS 2003) is not installed

Question

Can BSM web parts be installed in a MOSS x64 environment? If so, how??

This is the question I have posed to Microsoft Premier Support, it looks like the answer is no but I want confirmation from MS before giving up!

**************** UPDATE FROM MICROSOFT ****************

From: Biao Zhang
Sent: 13 September 2007 14:04
To: 'Hodges, Kristen'
Subject: RE: SRZ070912000200: BSM2005: Can BSM web parts be installed in a MOSS x64 environment? If so, how??
Importance: High

Dear Kristen,

I understand this question is very urgent and I have done some research; here is the update:

1. Business Scorecard Manager Server 2005 does not have a 64-bit version and, officially, Business Scorecard Manager Server 2005 (including the web parts) does not support the Windows 64-bit platform.

Tuesday, 21 August 2007

Sharepoint 2007 (MOSS) Upgrade - TempDb issues

In the middle of a Sharepoint 2007 (MOSS) upgrade....

One key piece of advice I can give - allow TempDb PLENTY of space when doing your initial full crawl.... in fact, you may find that the %-increase autogrowth approach doesn't work during this process because it can't get enough space to fulfil its very hungry needs! A fixed-MB increase approach should work fine.

You may see errors such as "The transaction log for database 'tempdb' is full."

During later incremental crawls it shouldn't need any special handling; this just seems to apply to the initial full crawl.

Monday, 25 June 2007

Visual Studio Db Pro (DataDude) Headaches

So, no question, Db Pro is a great tool. However, it is not without its headaches.

When working with TFS and multiple team members, we have had no end of synching problems and regular out of memory errors. This only seems to occur when multiple users are working on the same Db Pro project.

As you can imagine, this has added to development time on projects... ok, not hugely, but certainly a day or maybe two in terms of chargeable hours - and in terms of developer frustration and a tense working environment!

My advice to you would be to keep it to single-user projects where possible... if not, maybe wait for SP1 or Orcas before switching to DataDude.

Monday, 18 June 2007

BUG REPORT: VS Db Pro - TFS - Get Latest Version doesn't update error list

When using Visual Studio Database Professional (DataDude) and Team Foundation Server there are synching problems between team members when using "Get Latest Version" - the error list does not update to reflect changes. Allow me to give you an example:

  1. DeveloperA deletes TableA and checks in. At this point, StoredProcA is in error because it depends on TableA - this is shown in DeveloperA's error list in Visual Studio.
  2. DeveloperB does "Get Latest Version" on the project
  3. DeveloperB corrects StoredProcA and checks in.
  4. DeveloperA does "Get Latest Version" on the project
After the Get Latest, you would expect that DeveloperA's error list is now clear of the aforementioned errors. Unfortunately, it isn't. To get rid of these errors, Visual Studio must be restarted OR each error double-clicked. And every now and then, only a reboot will resolve it.

Have logged it with Microsoft Connect... you can view/watch it at https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=285730


******************************** UPDATE ********************************

This is the response from Microsoft.

Unfortunately, this is expected behavior. The database project doesn't know when all of the files have finished being updated, so it doesn't know when to refresh itself. When the project is in this situation, there is an information message in the info strip at the top of schema view telling you that you need to refresh. You can refresh the project one of these ways:

  • do a refresh on the solution explorer (button on the toolbar of solution explorer)
  • open the file that was modified
  • build
  • close/reopen the project

FEATURE REQUEST: VS DbPro - Refactor/Delete function

If you have been using Db Pro (aka DataDude) you may have hit this little headache during your build phase... if you delete a column, table, stored proc, function or other object, references to it are not deleted, leaving your project in an error state. The fix is simple, but time-consuming and unnecessary.

I've suggested this should be a prompted action whenever you do a schema view delete... and if the action is not carried out, then all affected items should be flagged.

If you've hit this little nugget of joy, please vote for the feature request...

https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=285727

************** UPDATE FROM MICROSOFT **************

This is a good suggestion, thank you for your feedback. We are considering your suggestion for future release.
Posted by Microsoft on 22/08/2007 at 9:45 AM
************** ANOTHER UPDATE FROM MICROSOFT **************
We are concerned about two potential problems around how we understand your suggestion:

  1. Cascading deletions. If we were to enable deletion of dependent objects automatically, the cascade could be disastrous.
  2. Statement level references. Do you envision deletion of referencing statements within programmability objects, or deletion of the programmability object (function, sproc, etc.) as a whole?

We do believe that the deletion of child objects, such as indexes or triggers on a table being deleted, is beneficial. We'd be interested in additional details you could provide on the scenario that you envision.
Posted by Microsoft on 8/22/2007 at 9:45 AM

I agree that a cascading delete would be far too heavy-handed. An excellent solution would be something similar to a 'schema compare'.

For example, let's say I select Refactor > Delete on a column called [Qty] on the [Order] table. A window appears, looking very like the schema compare window, which lists all objects dependent on [Order].[Qty], broken down by schema type. When an object is selected, the object definition is shown below with the references to [Order].[Qty] highlighted, and below that the update script (with the cursor in the update script positioned to the current object).

I have 3 Update Actions available for each object in the list - Skip, Drop and Update. When I select Drop, the update script is automatically updated to carry out that action. When I select Update, I then remove or replace references to [Order].[Qty] in the update script for the current object. When I select Skip, the object is not changed in any way.

Once all changes are done, I can then hit 'Write Updates' to apply the changes, including the delete of [Order].[Qty].

This implies that a 'Find Orphaned References' function should also exist which identifies references to deleted objects, i.e. where the 'Skip' update action was selected. And again, a similar approach could be taken as outlined above.

This approach covers both your questions, i.e. eliminating the impact of cascading deletes, and amending programmability objects as well as deleting them.
Posted by khodges on 8/29/2007 at 5:33 PM