Q: Microsoft Office 2007 file types, MIME types and identifying characters

Where can I find a list of all of the MIME types and identifying characters for Microsoft Office 2007 files? I have an upload form that restricts uploads based on extensions and identifying characters, but I cannot seem to find the Office 2007 MIME types. Can anyone help?

A (score 26): Office 2007 MIME types for IIS:

.docm, application/vnd.ms-word.document.macroEnabled.12
.docx, application/vnd.openxmlformats-officedocument.wordprocessingml.document
.dotm, application/vnd.ms-word.template.macroEnabled.12
.dotx, application/vnd.openxmlformats-officedocument.wordprocessingml.template
.potm, application/vnd.ms-powerpoint.template.macroEnabled.12
.potx, application/vnd.openxmlformats-officedocument.presentationml.template
.ppam, application/vnd.ms-powerpoint.addin.macroEnabled.12
.ppsm, application/vnd.ms-powerpoint.slideshow.macroEnabled.12
.ppsx, application/vnd.openxmlformats-officedocument.presentationml.slideshow
.pptm, application/vnd.ms-powerpoint.presentation.macroEnabled.12
.pptx, application/vnd.openxmlformats-officedocument.presentationml.presentation
.xlam, application/vnd.ms-excel.addin.macroEnabled.12
.xlsb, application/vnd.ms-excel.sheet.binary.macroEnabled.12
.xlsm, application/vnd.ms-excel.sheet.macroEnabled.12
.xlsx, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
.xltm, application/vnd.ms-excel.template.macroEnabled.12
.xltx, application/vnd.openxmlformats-officedocument.spreadsheetml.template

Tags: file_type, mime, office_2007
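A note on the identifying characters: all of the Open XML formats listed above are ZIP packages, so the files begin with the ZIP signature bytes 0x50 0x4B ("PK"). A minimal C# sketch of how the list might drive server-side upload validation; OfficeUploadFilter and IsAllowedUpload are illustrative names, not part of any framework, and the map is trimmed for brevity:

using System;
using System.Collections.Generic;
using System.IO;

static class OfficeUploadFilter
{
    // Extension-to-MIME map built from the table above (trimmed for brevity).
    static readonly Dictionary<string, string> AllowedTypes =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        { ".docx", "application/vnd.openxmlformats-officedocument.wordprocessingml.document" },
        { ".xlsx", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" },
        { ".pptx", "application/vnd.openxmlformats-officedocument.presentationml.presentation" }
        // ...add the remaining entries from the table above
    };

    public static bool IsAllowedUpload(string fileName, string contentType, Stream content)
    {
        string ext = Path.GetExtension(fileName);
        string expected;
        if (!AllowedTypes.TryGetValue(ext, out expected) ||
            !string.Equals(expected, contentType, StringComparison.OrdinalIgnoreCase))
            return false;

        // Open XML documents are ZIP packages: the first two bytes are 'P', 'K'.
        // (This only filters out files that are not ZIP-based at all; it does
        // not distinguish .docx from .xlsx.)
        int b1 = content.ReadByte();
        int b2 = content.ReadByte();
        return b1 == 0x50 && b2 == 0x4B;
    }
}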
Q: XSD DataSets and ignoring foreign keys

I have a pretty standard table set-up in a current application using the .NET XSD DataSet and TableAdapter features. My contracts table consists of some standard contract information, with a column for the primary department. This column is a foreign key to my Departments table, where I store the basic department name, id, and notes. This is all set up and functioning in my SQL Server.

When I use the XSD tool, I can drag both tables in at once and it auto-detects/creates the foreign key I have between these two tables. This works great when I'm on my main page and am viewing contract data. However, when I go to my administrative page to modify the department data, I typically do something like this:

Dim dtDepartment As New DepartmentDataTable()
Dim taDepartment As New DepartmentTableAdapter()
taDepartment.Fill(dtDepartment)

However, at this point an exception is thrown saying, in effect, that a foreign key reference is broken - I'm guessing because I don't have the Contract DataTable filled. How can I fix this problem? I know I can simply remove the foreign key from the XSD to make things work fine, but having the additional integrity check there, and having the XSD schema match the SQL schema in the database, is nice.

A (score 13): You can try turning check constraints off on the DataSet (it's in its properties), or altering the properties of that relationship and changing the key to a simple reference - up to you.

Tags: .net, database, xsd
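The DataSet property the answer refers to is EnforceConstraints. A minimal C# sketch, assuming a typed dataset named ContractsDataSet generated from the XSD (the type names are hypothetical):

using System.Data;

void LoadDepartments()
{
    ContractsDataSet ds = new ContractsDataSet(); // hypothetical typed dataset

    // Suspend constraint checking while only the Department table is loaded,
    // so the foreign key to Contract does not fire during Fill.
    ds.EnforceConstraints = false;

    DepartmentTableAdapter taDepartment = new DepartmentTableAdapter();
    taDepartment.Fill(ds.Department);

    // Re-enable once the related tables are filled; this line throws if the
    // loaded data really does violate the relationship.
    // ds.EnforceConstraints = true;
}

This keeps the relationship in the XSD (so it still matches the SQL schema) while letting the administrative page fill one table in isolation.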
Q: What is the meaning of the type safety warning in certain Java generics casts?

What is the meaning of the Java warning?

Type safety: The cast from Object to List<Integer> is actually checking against the erased type List

I get this warning when I try to cast an Object to a type with generic information, such as in the following code:

Object object = getMyList();
List<Integer> list = (List<Integer>) object;

A (score 53): This warning is there because Java does not actually store type information at run-time in an object that uses generics. Thus, if object is actually a List<String>, no ClassCastException will be thrown at run-time until an item is accessed from the list that doesn't match the generic type declared in the variable.

This can cause further complications if items are added to the list under this incorrect generic type information. Any code still holding a reference to the list, but with the correct generic type information, will now have an inconsistent list.

To remove the warning, try:

List<?> list = (List<?>) object;

However, note that you will not be able to use certain methods, such as add, because the compiler doesn't know whether you are trying to add an object of the incorrect type. The above will work in a lot of situations, but if you have to use add, or some similarly restricted method, you will just have to suffer the yellow underline in Eclipse (or a @SuppressWarnings annotation).

Tags: casting, generics, java, type_safety, warnings
Q: Floating Point Number parsing: Is there a Catch All algorithm?

One of the fun parts of multi-cultural programming is number formats.

Americans use 10,000.50
Germans use 10.000,50
French use 10 000,50

My first approach would be to take the string, parse it backwards until I encounter a separator, and use this as my decimal separator. There is an obvious flaw with that: 10.000 would be interpreted as 10.

Another approach: if the string contains two different non-numeric characters, use the last one as the decimal separator and discard the others. If I only have one, check whether it occurs more than once and discard it if it does. If it only appears once, check whether it has three digits after it. If yes, discard it; otherwise, use it as the decimal separator.

The obvious "best solution" would be to detect the user's culture or browser, but that does not work if you have a Frenchman using an en-US Windows/browser.

Does the .NET Framework contain some mythical black-magic floating point parser that is better than Double.(Try)Parse() at auto-detecting the number format?

A (score 31): I think the best you can do in this case is to take their input and then show them what you think they meant. If they disagree, show them the format you're expecting and get them to enter it again.

A (score 27): I don't know the ASP.NET side of the problem, but .NET has a pretty powerful class: System.Globalization.CultureInfo. You can use the following code to parse a string containing a double value:

double d = double.Parse("100.20", CultureInfo.CurrentCulture);
// -- OR --
double d = double.Parse("100.20", CultureInfo.CurrentUICulture);

If ASP.NET somehow (e.g. using HTTP request headers) passes the current user's CultureInfo to either CultureInfo.CurrentCulture or CultureInfo.CurrentUICulture, these will work fine.

A (score 12): You can't please everyone. If I enter ten as 10.000, and someone else enters ten thousand as 10.000, you cannot handle that without some knowledge of the culture of the input. Detect the culture somehow (browser, system setting - what is the use case? ASP? Internal app, or open to the world?), or provide an example of the expected formatting, and use the most lenient parser you can. Probably something like:

double d = Double.Parse("5,000.00", NumberStyles.Any, CultureInfo.InvariantCulture);

A (score 10): The difference between 12.345 in French and in English is a factor of 1000. If you supply an expected range where max < 1000 * min, you can easily guess.

Take for example the height of a person (including babies and children) in mm. By using a range of 200-3000, an input of 1.800 or 1,800 can unambiguously be interpreted as 1 meter and 80 centimeters, whereas an input of 912.300 or 912,300 can unambiguously be interpreted as 91 centimeters and 2.3 millimeters.

Tags: .net, asp.net, c#, globalization, internationalization
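The range idea in the last answer translates into a short heuristic. A sketch, assuming inputs follow either the en-US or the de-DE convention; ParseWithRange is a hypothetical helper, and it throws when the two readings cannot be told apart:

using System;
using System.Globalization;

static class AmbiguousNumberParser
{
    // Tries a '.'-decimal and a ','-decimal reading and keeps whichever one
    // falls inside the expected range. Only reliable when max < 1000 * min.
    public static double ParseWithRange(string input, double min, double max)
    {
        CultureInfo english = CultureInfo.GetCultureInfo("en-US"); // 10,000.50
        CultureInfo german = CultureInfo.GetCultureInfo("de-DE");  // 10.000,50

        double value;
        if (double.TryParse(input, NumberStyles.Any, english, out value)
            && value >= min && value <= max)
            return value;

        if (double.TryParse(input, NumberStyles.Any, german, out value)
            && value >= min && value <= max)
            return value;

        throw new FormatException("Ambiguous or out-of-range value: " + input);
    }
}

With the height example (range 200-3000 mm), "1.800" parses to 1.8 under en-US, which is out of range, so the de-DE reading of 1800 wins; "1,800" parses to 1800 under en-US directly.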
Q: Homegrown consumption of web services

I've been writing a few web services for a .NET app, and now I'm ready to consume them. I've seen numerous examples where there is homegrown code for consuming the service, as opposed to using the auto-generated methods Visual Studio creates when adding the web reference. Are there any advantages to this?

A (score 11): No, what you're doing is fine. Don't let those people confuse you.

If you've written the web services with .NET, then the reference proxies generated by .NET are going to be quite suitable. The situation you describe (where you are both producer and consumer) is the ideal situation.

If you need to connect to a web service that is unknown at compile time, then you would want a more dynamic approach, where you deduce the 'shape' of the web service.

But start by using the auto-generated proxy class, and don't worry about it until you hit a limitation. And when you do - come back to Stack Overflow ;-)

Tags: .net, web_services
Q: Lucene Score results

In Lucene, if you have multiple indexes, each covering only one partition, why does the same search on different indexes return results with different scores? The results from different servers match exactly.

i.e. if I searched for:

Name - John Smith
DOB - 11/11/1934

Partition 0 would return a score of 0.345
Partition 1 would return a score of 0.337

Both match exactly on name and DOB.

A (score 20): The scoring includes the inverse document frequency (IDF). If the term "John Smith" appears 100 times in partition 0 but only once in partition 1, the score for a John Smith search will be higher in partition 1, as the term is scarcer there.

To get round this, you would either have to have your index span all partitions, or you would need to override the IDF.

A (score 13): Because the score is determined on the index, if I am not completely mistaken. If you have different indexes (more/less or different data that was indexed), the score will differ:

http://lucene.apache.org/core/3_6_0/scoring.html

(Warning: contains math :-))

A (score 9): You may also be interested in the output of the explain() method, and the resulting Explanation object, which will give you an idea of how things are scored the way they are.

Tags: lucene, search
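A sketch of the explain() suggestion using the Lucene.Net port; the API names are assumed to mirror the Java originals (Lucene.Net 3.x era):

using System;
using Lucene.Net.Search;

static void ExplainTopHit(IndexSearcher searcher, Query query)
{
    TopDocs hits = searcher.Search(query, 1);
    if (hits.ScoreDocs.Length == 0) return;

    // Explanation breaks the score into tf, idf, fieldNorm, etc., which makes
    // the cross-partition IDF difference described above directly visible.
    Explanation explanation = searcher.Explain(query, hits.ScoreDocs[0].Doc);
    Console.WriteLine(explanation.ToString());
}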
Q: How to write to Web.Config in Medium Trust?

Uploading my first decently sized web app to my shared host provided me with a fresh set of challenges, by which I mean sleepless nights. The issue was that I had most certainly not developed my application for medium trust (or had any clue what that was).

I mitigated all of the issues, save one: I had written an installer for the admin to be able to specify their connection string and other preferences, but I cannot find a way to write to a web.config in medium trust. Does anyone have a solution, or should I just be putting preferences in another file?

A (score 24): That actually sounds like IIS's Low level. If it is, then you won't be able to write to any file, not just the web.config.

Here are the levels from IIS's help file:

Full (internal) - Specifies unrestricted permissions. Grants the ASP.NET application permissions to access any resource that is subject to operating system security. All privileged operations are supported.

High (web_hightrust.config) - Specifies a high level of code access security, which means that the application cannot do any one of the following things by default:

Call unmanaged code.
Call serviced components.
Write to the event log.
Access Message Queuing service queues.
Access ODBC, OleDb, or Oracle data sources.

Medium (web_mediumtrust.config) - Specifies a medium level of code access security, which means that, in addition to High Trust Level restrictions, the ASP.NET application cannot do any of the following things by default:

Access files outside the application directory.
Access the registry.
Make network or Web service calls.

Low (web_lowtrust.config) - Specifies a low level of code access security, which means that, in addition to Medium Trust Level restrictions, the application cannot do any of the following things by default:

Write to the file system.
Call the Assert method.

Minimal (web_minimaltrust.config) - Specifies a minimal level of code access security, which means that the application has only execute permissions.

I would suggest that if you are dead set on having an installer, have it create a web.config in memory that the user can save locally and FTP up afterward.

Tags: asp.net, c#, medium_trust
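A minimal C# sketch of the generate-in-memory suggestion: build the fragment with XmlWriter and hand it to the admin as a download to save locally and FTP up. The connection string name and the surrounding page plumbing are assumptions:

using System.Text;
using System.Web;
using System.Xml;

static void SendGeneratedConfig(HttpResponse response, string connectionString)
{
    StringBuilder sb = new StringBuilder();
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.Indent = true;

    using (XmlWriter writer = XmlWriter.Create(sb, settings))
    {
        // Writes just the <connectionStrings> section; the admin merges it
        // into the real web.config by hand.
        writer.WriteStartElement("connectionStrings");
        writer.WriteStartElement("add");
        writer.WriteAttributeString("name", "Default");
        writer.WriteAttributeString("connectionString", connectionString);
        writer.WriteEndElement();
        writer.WriteEndElement();
    }

    response.ContentType = "text/xml";
    response.AddHeader("Content-Disposition", "attachment; filename=web.config.fragment");
    response.Write(sb.ToString());
    response.End();
}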
Q: Visual Studio Setup Project - Per User Registry Settings

I'm trying to maintain a Setup Project in Visual Studio 2003 (yes, it's a legacy application). The problem we have at the moment is that we need to write registry entries to HKCU for every user on the computer. They need to be in HKCU rather than HKLM because they are the default user settings, and they do change per user.

My feeling is that:

This isn't possible.
This isn't something the installer should be doing, but something the application should be doing (after all, what happens when a user profile is created after the install?).

With that in mind, I still want to change as little as possible in the application, so my question is: is it possible to add registry entries for every user in a Visual Studio 2003 setup project?

And, at the moment the project lists five registry root keys (HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_USERS, and User/Machine Hive). I don't really know anything about the Users root key, and haven't seen User/Machine Hive. Can anyone enlighten me on what they are? Perhaps they could solve my problem above.

A (score 6): First: yes, this is something that belongs in the application, for the exact reason you specified: what happens after new user profiles are created? Sure, if you're using a domain it's possible to have some stuff put in the registry on creation, but this is not really a use case. The application should check whether there are settings and use the default settings if not.

That being said, it IS possible to change other users' keys through the HKEY_USERS hive.

I have no experience with the Visual Studio 2003 Setup Project, so here is a bit of (totally unrelated) VBScript code that might just give you an idea where to look:

const HKEY_USERS = &H80000003
strComputer = "."
Set objReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")
strKeyPath = ""
objReg.EnumKey HKEY_USERS, strKeyPath, arrSubKeys
strKeyPath = "\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing"
For Each subkey In arrSubKeys
    objReg.SetDWORDValue HKEY_USERS, subkey & strKeyPath, "State", 146944
Next

(Code courtesy of Jeroen Ritmeijer)

A (score 6): I'm guessing that because you want to set it for all users, you're on some kind of shared computer, which is probably running under a domain?

HERE BE DRAGONS

Let's say Joe and Jane regularly log onto the computer; they will each have 'registries'. You'll then install your app, and the installer will employ giant hacks and disgusting things to set items under HKCU for them.

THEN Bob will come along and log on (he, and 500 other people, have accounts in the domain and so can do this). He's never used this computer before, so he has no registry. The first time he logs in, Windows creates him one, but he won't have your setting. Your app then falls over or behaves incorrectly, and Bob complains loudly about those crappy products from Raynixon Incorporated.

The correct answer is to just have some default settings in your app, which it can write to the registry if it doesn't find them. It's good general practice that your app should never depend on the registry, and should create things as needed, for any registry entry, not just HKCU, anyway.

A (score 3): I'm partway to my solution with this entry on MSDN (don't know how I couldn't find it before).

User/Machine Hive

Subkeys and values entered under this hive will be installed under the HKEY_CURRENT_USER hive when a user chooses "Just Me", or the HKEY_USERS hive when a user chooses "Everyone" during installation.

Registry Editor (Archive of MSDN Article)

A (score 2): Despite what the MSDN article (Archive of MSDN Article) says about User/Machine Hive, it doesn't write to HKEY_USERS. Rather, it writes to HKCU if you select Just Me, and HKLM if you select Everyone.

So my solution is going to be to use the User/Machine Hive, and then in the application check whether the registry entries are in HKCU and, if not, copy them from HKLM. I know this probably isn't the most ideal way of doing it, but it requires the least amount of changes.

Tags: installation, registry, visual_studio, windows
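A C# sketch of that final approach (seed HKCU from the HKLM defaults on startup); the key path is hypothetical:

using Microsoft.Win32;

static class SettingsBootstrapper
{
    const string KeyPath = @"Software\MyCompany\MyApp"; // hypothetical path

    public static void EnsureUserSettings()
    {
        using (RegistryKey user = Registry.CurrentUser.OpenSubKey(KeyPath))
        {
            if (user != null) return; // this profile is already seeded
        }

        using (RegistryKey machine = Registry.LocalMachine.OpenSubKey(KeyPath))
        {
            if (machine == null) return; // installer defaults absent; use in-app defaults

            using (RegistryKey target = Registry.CurrentUser.CreateSubKey(KeyPath))
            {
                // Copy each installer-written default into the current user's hive.
                foreach (string name in machine.GetValueNames())
                    target.SetValue(name, machine.GetValue(name));
            }
        }
    }
}

Calling EnsureUserSettings() at application startup covers profiles created after installation, which the installer alone cannot.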
Q: Client collation and SQL Server 2005

We're upgrading an existing program from Win2k/SQL Server 2k to Windows 2003 and SQL Server 2005, as well as purchasing a new program that also uses 2k3/2k5. The vendor says that for us to host both databases we need to get the Enterprise version, because the software's clients use different collations for their connections and only Enterprise supports this. I cannot find anything on MS's site to support this and, honestly, don't want to pay the extra for Enterprise if the Standard edition works. Am I missing some not-talked-about feature of SQL Server, or is this, as I suspect, a vendor trying to upsell me?

A (score 7): All editions of SQL Server 2000/2005/2008 support having multiple databases, each using its own collation sequence. You don't need the Enterprise version.

When you have a database that uses a collation sequence different from the server's default collation, you will need to take some extra precautions if you use temporary tables and/or table variables. Temp tables/variables live in the tempdb database, which uses the collation sequence of the master database. Just remember to use "COLLATE database_default" when defining character fields in the temp tables/variables. I blogged about that not too long ago, if you want some more details.

Tags: sql_server, sql_server_2005, windows_server_2003
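A sketch of the temp-table precaution, with the T-SQL embedded in a C# helper; the dbo.Customers table is a placeholder:

using System.Data.SqlClient;

static class TempTableExample
{
    // COLLATE database_default pins the temp table's character column to the
    // current database's collation instead of tempdb's, so joins and
    // comparisons against permanent tables don't raise collation conflicts.
    const string Sql = @"
CREATE TABLE #Names
(
    CustomerName nvarchar(100) COLLATE database_default
);
INSERT INTO #Names (CustomerName)
SELECT CustomerName FROM dbo.Customers;";

    public static void Run(SqlConnection openConnection)
    {
        using (SqlCommand cmd = new SqlCommand(Sql, openConnection))
        {
            cmd.ExecuteNonQuery();
        }
    }
}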
Q: Upgrading SQL Server 6.5

Yes, I know. The existence of a running copy of SQL Server 6.5 in 2008 is absurd. That stipulated, what is the best way to migrate from 6.5 to 2005? Is there any direct path? Most of the documentation I've found deals with upgrading 6.5 to 7. Should I forget about the native SQL Server upgrade utilities, script out all of the objects and data, and try to recreate from scratch?

I was going to attempt the upgrade this weekend, but server issues pushed it back until next. So, any ideas would be welcomed during the course of the week.

Update. This is how I ended up doing it:

1. Back up the database in question, and Master, on 6.5.
2. Execute SQL Server 2000's instcat.sql against 6.5's Master. This allows SQL Server 2000's OLEDB provider to connect to 6.5.
3. Use SQL Server 2000's standalone "Import and Export Data" to create a DTS package, using OLEDB to connect to 6.5. This successfully copied all of 6.5's tables to a new 2005 database (also using OLEDB).
4. Use 6.5's Enterprise Manager to script out all of the database's indexes and triggers to a .sql file.
5. Execute that .sql file against the new copy of the database, in 2005's Management Studio.
6. Use 6.5's Enterprise Manager to script out all of the stored procedures.
7. Execute that .sql file against the 2005 database. Several dozen sprocs had issues making them incompatible with 2005, mainly non-ANSI joins and quoted-identifier issues. Corrected all of those issues and re-executed the .sql file.
8. Recreated 6.5's logins in 2005 and gave them appropriate permissions.

There was a bit of rinse/repeat when correcting the stored procedures (there were hundreds of them to correct), but the upgrade went great otherwise. Being able to use Management Studio instead of Query Analyzer and Enterprise Manager 6.5 is such an amazing difference. A few report queries that took 20-30 seconds on the 6.5 database are now running in 1-2 seconds, without any modification, new indexes, or anything. I didn't expect that kind of immediate improvement.

A (score 11): Hey, I'm still stuck in that camp too. The third-party application we have to support is FINALLY going to 2K5, so we're almost out of the woods. But I feel your pain 8^D

That said, from everything I heard from our DBA, the key is to convert the database to 8.0 format first, and then go to 2005. I believe they used the built-in migration/upgrade tools for this. There are some big steps between 6.5 and 8.0 that are better solved there than going from 6.5 to 2005 directly.

Your BIGGEST pain, if you didn't know already, is that DTS is gone in favor of SSIS. There is a shell-type module that will run your existing DTS packages, but you're going to want to manually recreate them all in SSIS. Ease of this will depend on the complexity of the packages themselves, but I've done a few at work so far and they've been pretty smooth.

A (score 6): You can upgrade 6.5 to SQL Server 2000. You may have an easier time getting hold of SQL Server 2000 or the 2000 version of the MSDE. Microsoft has a page on going from 6.5 to 2000. Once you have the database in 2000 format, SQL Server 2005 will have no trouble upgrading it to the 2005 format.

If you don't have SQL Server 2000, you can download the MSDE 2000 version directly from Microsoft.

A (score 3): I am by no means authoritative, but I believe the only supported path is from 6.5 to 7. Certainly that would be the most sane route; then I believe you can migrate from 7 directly to 2005 pretty painlessly.

As for scripting out all the objects - I would advise against it, as you will inevitably miss something (unless your database is truly trivial).

A (score 3): If you can find a professional or some other super-enterprise version of Visual Studio 6.0 - it came with a copy of MSDE (basically the predecessor to SQL Express). I believe MSDE 2000 is still available as a free download from Microsoft, but I don't know if you can migrate directly from 6.5 to 2000.

I think in concept you won't likely face any danger. Years of practice, however, tell me that you will always miss some object, permission, or other database item that won't manifest itself immediately. The more you can script out of the entire dump, the better, as you will be less likely to miss something - and if you do miss something, it can easily be added to the script and fixed. I would avoid any manual steps (other than hitting the Enter key once) like the plague.

Tags: migration, sql_server
Q: Why doesn't SQL Full Text Indexing return results for words containing #?

For instance, my query is like the following using SQL Server 2005:

SELECT * FROM Table WHERE FREETEXT(SearchField, 'c#')

I have a full-text index defined to use the column SearchField, which returns results when using:

SELECT * FROM Table WHERE SearchField LIKE '%c#%'

I believe # is a special character, so how do I allow FREETEXT to work correctly for the query above?

A (score 14): The # char is indexed as punctuation and therefore ignored, so it looks like we'll remove the letter C from our word-indexing ignore lists. Tested it locally after doing that and rebuilding the indexes, and I get results! Looking at using a different word-breaker language on the indexed column, so that those special characters aren't ignored.

EDIT: I also found this information:

c# is indexed as c (if c is not in your noise word list; see more on noise word lists later), but C# is indexed as C# (in SQL 2005 and SQL 2000 running on Win2003, regardless of whether C or c is in your noise word list). It is not only C# that is stored as C#, but any capital letter followed by #. Conversely, c++ (and any other lower-cased letter followed by ++) is indexed as c (regardless of whether c is in your noise word list).

A (score 1): Quoting a much-replicated help page about the Indexing Service query language:

To use specially treated characters such as &, |, ^, #, @, $, (, ) in a query, enclose your query in quotation marks (").

As far as I know, full-text search in MSSQL is also done by the Indexing Service, so this might help.

Tags: full_text_search, indexing, sql, sql_server, sql_server_2005
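Following the quoting advice, one way to search for terms like c# from .NET is to switch from FREETEXT to CONTAINS, which accepts a quoted phrase. A sketch; the table and column names are placeholders:

using System.Data.SqlClient;

static void SearchFullText(SqlConnection openConnection, string term)
{
    // CONTAINS takes its search condition as a string, so the term can be
    // wrapped in double quotes to survive word breaking.
    const string sql = "SELECT * FROM [Table] WHERE CONTAINS(SearchField, @term)";

    using (SqlCommand cmd = new SqlCommand(sql, openConnection))
    {
        cmd.Parameters.AddWithValue("@term", "\"" + term + "\"");
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // consume the matching rows...
            }
        }
    }
}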
Q: Displaying Flash content in a C# WinForms application

What is the best way to display Flash content in a C# WinForms application? I would like to create a user control (similar to the current PictureBox) that will be able to display images and Flash content. It would be great to be able to load the Flash content from a stream of sorts rather than a file on disk.

A (score 33): While I haven't used a Flash object inside a Windows Forms application myself, I do know that it's possible.

In Visual Studio, on your toolbox, choose to add a new component. Then, in the new window that appears, choose the "COM Components" tab to get a list in which you can find the "Shockwave Flash Object".

Once added to the toolbox, simply use the control as you would use any other "standard" control from Visual Studio. Three simple commands are available to interact with the control:

AxShockwaveFlash1.Stop()
AxShockwaveFlash1.Movie = FilePath & "\FileName.swf"
AxShockwaveFlash1.Play()

which, I think, are all self-explanatory.

"It would be great to be able to load the Flash content from a stream of sorts rather than a file on disk."

I just saw you are also looking for a means to load the content from a stream, and because I'm not really sure that is possible with the Shockwave Flash Object, I will give you another option (two, actually).

The first is the one I would advise you to use only when necessary, as it uses the full-blown WebBrowser component (also available as an extra toolbox item), which is like trying to shoot a fly with a bazooka. Of course it will work, as the control will act as a real browser window (actually the Internet Explorer browser), but it's not really meant to be used in the way you need it.

The second option is to use something I just discovered while looking for more information about playing Flash content inside a Windows Form. F-IN-BOX is a commercial solution that will also play content from a given website URL. (The link provided will direct you to the .NET code you have to use.)

A (score 8): Sven, you reached the same conclusion as I did: I found the Shockwave Flash Object, albeit from a slightly different route, but was stumped on how to load the files from somewhere other than a file on disk/URL. F-IN-BOX, although just a wrapper of the Shockwave Flash Object, seems to provide much more functionality, which may just help me!

Shooting flies with bazookas may be fun, but an embedded web browser is not the path that I am looking for. :)

There was a link on Adobe's site that talked about "Embedding and Communicating with the Macromedia Flash Player in C# Windows Applications", but they seem to have removed it :(

Tags: adobe, c#, flash, macromedia, winforms
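The three commands above translate to C# roughly as follows, assuming the interop wrapper (AxShockwaveFlashObjects) that Visual Studio generates when the COM component is added to a form:

using AxShockwaveFlashObjects;

public partial class PlayerForm : System.Windows.Forms.Form
{
    // Dropped onto the form from the toolbox in the designer.
    private AxShockwaveFlash flashPlayer;

    private void PlayMovie(string swfPath)
    {
        flashPlayer.Stop();
        flashPlayer.Movie = swfPath; // full path or URL to the .swf
        flashPlayer.Play();
    }
}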
ViewState invalid only in Safari
One of the sites I maintain relies heavily on the use of ViewState (it isn't my code). However, on certain pages where the ViewState is extra-bloated, Safari throws a "Validation of viewstate MAC failed" error. This appears to only happen in Safari. Firefox, IE and Opera all load successfully in the same scenario.
[ "While I second the Channel 9 solution, also be aware that in some hosted environments Safari is not considered an up-level browser. You may need to add it to your application's browscap in order to make use of some ASP.Net features. \nThat was the root cause of some headaches we had for a client's site that used the ASP Menu control.\n", "My first port of call would be to go through the elements on the page and see which controls:\n\nWill still work when I switch ViewState off\nCan be moved out of the page and into an AJAX call to be loaded when required\n\nFailing that, and here's the disclaimer - I've never used this solution on a web-facing site - but in the past where I've wanted to eliminate massive ViewStates in limited-audience applications I have stored the ViewState in the Session.\nIt has worked for me because the hit to memory isn't significant for the number of users, but if you're running a fairly popular site I wouldn't recommend this approach. However, if the Session solution works for Safari you could always detect the user agent and fudge appropriately.\n", "I've been doing a little research into this and whilst I'm not entirely sure its the cause I believe it is because Safari is not returning the full result set (hence cropping it).\nI have been in dicussion with another developer and found the following post on Channel 9 as well which recommends making use of the SQL State service to store the viewstate avoiding the postback issue and also page size.\nhttp://channel9.msdn.com/forums/TechOff/250549-ASPNET-ViewState-flawed-architecture/?CommentID=270477#263702\nDoes this seem like the best solution?\n" ]
[ 5, 3, 2 ]
[]
[]
[ ".net", "c#", "safari", "viewstate" ]
stackoverflow_0000001189_.net_c#_safari_viewstate.txt
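A minimal C# sketch of the Session-backed ViewState idea discussed in the answers above: ASP.NET 2.0 ships a SessionPageStatePersister, and overriding the page's PageStatePersister property swaps it in, so only a small token travels to the client instead of the bloated hidden field. This is a sketch to place in a page (or base page) class, not the poster's exact code.

// Keeps ViewState in Session instead of the __VIEWSTATE hidden field,
// which sidesteps Safari truncating an oversized form value.
protected override PageStatePersister PageStatePersister
{
    get { return new SessionPageStatePersister(this); }
}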
Q: Using MSTest with CruiseControl.NET We have been using CruiseControl for quite a while with NUnit and NAnt. For a recent project we decided to use the testing framework that comes with Visual Studio, which so far has been adequate. I'm attempting to get the solution running in CruiseControl. I've finally got the build itself to work; however, I have been unable to get any tests to show up in the CruiseControl interface despite adding custom build tasks and components designed to do just that. Does anyone have a definitive link out there to instructions on getting this set up? A: Not sure if that helps (I found the ccnet documentation somewhat unhelpful at times): Using CruiseControl.NET with MSTest A: The CC.Net interface is generated via an XSL transform on your XML files put together as specified in the ccnet.config file for your projects. The XSL is already written for things like FxCop - check your server's CC xsl directory for examples - shouldn't be too hard to write your own to add in the info - just remember to add the XML output from your tests into the main log.
Using MSTest with CruiseControl.NET
We have been using CruiseControl for quite a while with NUnit and NAnt. For a recent project we decided to use the testing framework that comes with Visual Studio, which so far has been adequate. I'm attempting to get the solution running in CruiseControl. I've finally got the build itself to work; however, I have been unable to get any tests to show up in the CruiseControl interface despite adding custom build tasks and components designed to do just that. Does anyone have a definitive link out there to instructions on getting this set up?
[ "Not sure if that helps (i found the ccnet Documentation somewhat unhelpful at times):\nUsing CruiseControl.NET with MSTest\n", "The CC.Net interface is generated via an XSL transform on your XML files put together as specified in the ccnet.config file for your projects. The XSL is already written for things like FxCop - check your server's CC xsl directory for examples - shouldn't be too hard to write your own to add in the info - just remember to add the XML output from your tests into the main log.\n" ]
[ 5, 1 ]
[]
[]
[ "build_process", "cruisecontrol.net", "msbuild" ]
stackoverflow_0000001314_build_process_cruisecontrol.net_msbuild.txt
Q: How can I get the authenticated user name under Apache using plain HTTP authentication and PHP? First, let's get the security considerations out of the way. I'm using simple authentication under Apache for a one-off, internal use only, non-internet connected LAN, PHP web app. How can I get the HTTP authenticated user name in PHP? A: I think that you are after this $username = $_SERVER['PHP_AUTH_USER']; $password = $_SERVER['PHP_AUTH_PW'];
How can I get the authenticated user name under Apache using plain HTTP authentication and PHP?
First, let's get the security considerations out of the way. I'm using simple authentication under Apache for a one-off, internal use only, non-internet connected LAN, PHP web app. How can I get the HTTP authenticated user name in PHP?
[ "I think that you are after this\n$username = $_SERVER['PHP_AUTH_USER'];\n$password = $_SERVER['PHP_AUTH_PW'];\n\n" ]
[ 40 ]
[]
[]
[ "apache", "authentication", "http_authentication", "php" ]
stackoverflow_0000001417_apache_authentication_http_authentication_php.txt
Q: Integrating Visual Studio Test Project with Cruise Control I'm looking into using Visual Studio 2008's built-in unit test projects instead of NUnit and I was wondering if anyone has any experience in trying to integrate this type of unit test project with Cruise Control.Net. A: From some of the initial research it doesn't appear to be a super simple solution. It appears that doing this involves having Visual Studio 2008 actually installed on the continuous integration server, which could be a deal breaker. Then configure the MSTest.exe to run in the tasks list, but first you'll have to make a batch file to delete the results files from previous passes as this file's existence causes an error. Then create an XSLT to format the results and put it into the dashboard.config file. The CodeProject article I found has a lot more detail. Integrating Visual Studio Team System 2008 Unit Tests with CruiseControl.NET
Integrating Visual Studio Test Project with Cruise Control
I'm looking into using Visual Studio 2008's built-in unit test projects instead of NUnit and I was wondering if anyone has any experience in trying to integrate this type of unit test project with Cruise Control.Net.
[ "From some of the initial research it doesn't appear to be a super simple solution. \nIt appears that doing this involves having Visual Studio 2008 actually installed on the continuous integration server, which could be a deal breaker.\nThen configure the MSTest.exe to run in the tasks list, but first you'll have to make a batch file to delete the results files from previous passes as this file's existence causes an error.\nThen create a xslt to format the results and put it into the dashboard.config file.\nThe code project article I found has a lot more detail.\nIntegrating Visual Studio Team System 2008 Unit Tests with CruiseControl.NET\n" ]
[ 10 ]
[]
[]
[ "continuous_integration", "cruisecontrol.net", "unit_testing", "visual_studio" ]
stackoverflow_0000001503_continuous_integration_cruisecontrol.net_unit_testing_visual_studio.txt
Q: Register Windows program with the mailto protocol programmatically How do I make it so mailto: links will be registered with my program? How would I then handle that event in my program? Most of the solutions I found from a quick Google search are how to do this manually, but I need to do this automatically for users of my program if they click a button, such as "set as default email client". #Edit: Removed reference to Delphi, because the answer is independent of your language. A: @Dillie-O: Your answer put me in the right direction (I should have expected it to just be a registry change) and I got this working. But I'm going to mark this as the answer because I'm going to put some additional information that I found while working on this. The solution to this question really doesn't depend on what programming language you're using, as long as there's some way to modify Windows registry settings. Finally, here's the answer: To associate a program with the mailto protocol for all users on a computer, change the HKEY_CLASSES_ROOT\mailto\shell\open\command Default value to: "Your program's executable" "%1" To associate a program with the mailto protocol for the current user, change the HKEY_CURRENT_USER\Software\Classes\mailto\shell\open\command Default value to: "Your program's executable" "%1" The %1 will be replaced with the entire mailto URL. For example, given the link: <a href="mailto:[email protected]">Email me</a> The following will be executed: "Your program's executable" "mailto:[email protected]" Update (via comment by shellscape): As of Windows 8, this method no longer works as expected. Win8 enforces the following key: HKEY_CURRENT_USER\Software\Microsoft\Windows\Shell\Associations\URLAssociations\MAILTO\UserChoice for which the ProgID of the selected app is hashed and can't be forged. It's a royal PITA. A: From what I've seen, there are a few registry keys that set the default mail client. One of them is: System Key: [HKEY_CLASSES_ROOT\mailto\shell\open\command] Value Name: (Default) Data Type: REG_SZ (String Value) Value Data: Mail program command-line. I'm not familiar with Delphi 7, but I'm sure there are some registry editing libraries there that you could use to modify this value. Some places list more than this key, others just this key, so you may need to test a little bit to find the proper one(s).
Register Windows program with the mailto protocol programmatically
How do I make it so mailto: links will be registered with my program? How would I then handle that event in my program? Most of the solutions I found from a quick Google search are how to do this manually, but I need to do this automatically for users of my program if they click a button, such as "set as default email client". #Edit: Removed reference to Delphi, because the answer is independent of your language.
[ "@Dillie-O: Your answer put me in the right direction (I should have expected it to just be a registry change) and I got this working. But I'm going to mark this as the answer because I'm going to put some additional information that I found while working on this.\nThe solution to this question really doesn't depend on what programming language you're using, as long as there's some way to modify Windows registry settings.\nFinally, here's the answer:\n\nTo associate a program with the mailto protocol for all users on a computer, change the HKEY_CLASSES_ROOT\\mailto\\shell\\open\\command Default value to:\n\"Your program's executable\" \"%1\"\nTo associate a program with the mailto protocol for the current user, change the HKEY_CURRENT_USER\\Software\\Classes\\mailto\\shell\\open\\command Default value to:\n\"Your program's executable\" \"%1\"\n\nThe %1 will be replaced with the entire mailto URL. For example, given the link:\n<a href=\"mailto:[email protected]\">Email me</a>\n\nThe following will be executed:\n\"Your program's executable\" \"mailto:[email protected]\"\nUpdate (via comment by shellscape):\nAs of Windows 8, this method no longer works as expected. Win8 enforces the following key: HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\Shell\\Associati‌​ons\\URLAssociations\\‌​MAILTO\\UserChoice for which the ProgID of the selected app is hashed and can't be forged. It's a royal PITA.\n", "From what I've seen, there are a few registry keys that set the default mail client. One of them is:\nSystem Key: [HKEY_CLASSES_ROOT\\mailto\\shell\\open\\command]\nValue Name: (Default)\nData Type: REG_SZ (String Value)\nValue Data: Mail program command-line.\nI'm not familiar with Delphi 7, but I'm sure there are some registry editing libraries there that you could use to modify this value.\nSome places list more than this key, others just this key, so you may need to test a little bit to find the proper one(s).\n" ]
[ 19, 13 ]
[]
[]
[ "mailto", "windows" ]
stackoverflow_0000000231_mailto_windows.txt
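Since the accepted answer boils down to two registry writes, here is a hedged C# sketch of the per-user variant using the Microsoft.Win32 API; the executable path is a hypothetical placeholder, and (per the update above) this pre-dates the Windows 8 UserChoice hashing.

using Microsoft.Win32;

// Registers a hypothetical C:\MyApp\MyApp.exe as the current user's mailto handler.
using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
    @"Software\Classes\mailto\shell\open\command"))
{
    // "" selects the key's (Default) value; %1 receives the full mailto URL
    key.SetValue("", "\"C:\\MyApp\\MyApp.exe\" \"%1\"");
}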
Q: How do I make a menu that does not require the user to press [enter] to make a selection? I've got a menu in Python. That part was easy. I'm using raw_input() to get the selection from the user. The problem is that raw_input (and input) require the user to press Enter after they make a selection. Is there any way to make the program act immediately upon a keystroke? Here's what I've got so far: import sys print """Menu 1) Say Foo 2) Say Bar""" answer = raw_input("Make a selection> ") if "1" in answer: print "foo" elif "2" in answer: print "bar" It would be great to have something like print menu while lastKey = "": lastKey = check_for_recent_keystrokes() if "1" in lastKey: #do stuff... A: On Windows: import msvcrt answer=msvcrt.getch() A: On Linux: set raw mode select and read the keystroke restore normal settings import sys import select import termios import tty def getkey(): old_settings = termios.tcgetattr(sys.stdin) tty.setraw(sys.stdin.fileno()) select.select([sys.stdin], [], [], 0) answer = sys.stdin.read(1) termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings) return answer print """Menu 1) Say Foo 2) Say Bar""" answer=getkey() if "1" in answer: print "foo" elif "2" in answer: print "bar" A: Wow, that took forever. Ok, here's what I've ended up with #!C:\python25\python.exe import msvcrt print """Menu 1) Say Foo 2) Say Bar""" while 1: char = msvcrt.getch() if char == chr(27): #escape break if char == "1": print "foo" break if char == "2": print "Bar" break It fails hard using IDLE, the python...thing...that comes with python. But once I tried it in DOS (er, CMD.exe), as a real program, then it ran fine. No one try it in IDLE, unless you have Task Manager handy. I've already forgotten how I lived with menus that aren't super-instant responsive. A: The reason msvcrt fails in IDLE is because IDLE is not accessing the library that runs msvcrt. Whereas when you run the program natively in cmd.exe it works nicely. For the same reason that your program blows up on Mac and Linux terminals. But I guess if you're going to be using this specifically for windows, more power to ya.
How do I make a menu that does not require the user to press [enter] to make a selection?
I've got a menu in Python. That part was easy. I'm using raw_input() to get the selection from the user. The problem is that raw_input (and input) require the user to press Enter after they make a selection. Is there any way to make the program act immediately upon a keystroke? Here's what I've got so far: import sys print """Menu 1) Say Foo 2) Say Bar""" answer = raw_input("Make a selection> ") if "1" in answer: print "foo" elif "2" in answer: print "bar" It would be great to have something like print menu while lastKey = "": lastKey = check_for_recent_keystrokes() if "1" in lastKey: #do stuff...
[ "On Windows:\nimport msvcrt\nanswer=msvcrt.getch()\n\n", "On Linux:\n\nset raw mode\nselect and read the keystroke\nrestore normal settings\n\n\nimport sys\nimport select\nimport termios\nimport tty\n\ndef getkey():\n old_settings = termios.tcgetattr(sys.stdin)\n tty.setraw(sys.stdin.fileno())\n select.select([sys.stdin], [], [], 0)\n answer = sys.stdin.read(1)\n termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)\n return answer\n\nprint \"\"\"Menu\n1) Say Foo\n2) Say Bar\"\"\"\n\nanswer=getkey()\n\nif \"1\" in answer: print \"foo\"\nelif \"2\" in answer: print \"bar\"\n\n\n", "Wow, that took forever. Ok, here's what I've ended up with \n#!C:\\python25\\python.exe\nimport msvcrt\nprint \"\"\"Menu\n1) Say Foo \n2) Say Bar\"\"\"\nwhile 1:\n char = msvcrt.getch()\n if char == chr(27): #escape\n break\n if char == \"1\":\n print \"foo\"\n break\n if char == \"2\":\n print \"Bar\"\n break\n\nIt fails hard using IDLE, the python...thing...that comes with python. But once I tried it in DOS (er, CMD.exe), as a real program, then it ran fine.\nNo one try it in IDLE, unless you have Task Manager handy.\nI've already forgotten how I lived with menus that arn't super-instant responsive.\n", "The reason msvcrt fails in IDLE is because IDLE is not accessing the library that runs msvcrt. Whereas when you run the program natively in cmd.exe it works nicely. For the same reason that your program blows up on Mac and Linux terminals.\nBut I guess if you're going to be using this specifically for windows, more power to ya.\n" ]
[ 10, 9, 4, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000001829_python.txt
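The same trick is available in other languages; as a point of comparison, here is a hedged C# sketch where Console.ReadKey(true) likewise returns a single keystroke without echo or Enter:

using System;

class Menu
{
    static void Main()
    {
        Console.WriteLine("Menu\n1) Say Foo\n2) Say Bar");
        while (true)
        {
            ConsoleKeyInfo key = Console.ReadKey(true); // blocks for one key, no Enter
            if (key.KeyChar == '1') { Console.WriteLine("foo"); break; }
            if (key.KeyChar == '2') { Console.WriteLine("bar"); break; }
            if (key.Key == ConsoleKey.Escape) break;    // escape exits, like chr(27) above
        }
    }
}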
Q: Can a Windows dll retrieve its own filename? A Windows process created from an exe file has access to the command string which invoked it, including its file's path and filename. eg. C:\MyApp\MyApp.exe --help. But this is not so for a dll invoked via LoadLibrary. Does anyone know of a way for a function loaded via dll to find out what its path and filename is? Specifically I'm interested in a Delphi solution, but I suspect that the answer would be pretty much the same for any language. A: I think you're looking for GetModuleFileName. http://www.swissdelphicenter.ch/torry/showcode.php?id=143: { If you are working on a DLL and are interested in the filename of the DLL rather than the filename of the application, then you can use this function: } function GetModuleName: string; var szFileName: array[0..MAX_PATH] of Char; begin FillChar(szFileName, SizeOf(szFileName), #0); GetModuleFileName(hInstance, szFileName, MAX_PATH); Result := szFileName; end; Untested though, been some time since I worked with Delphi :)
Can a Windows dll retrieve its own filename?
A Windows process created from an exe file has access to the command string which invoked it, including its file's path and filename. eg. C:\MyApp\MyApp.exe --help. But this is not so for a dll invoked via LoadLibrary. Does anyone know of a way for a function loaded via dll to find out what its path and filename is? Specifically I'm interested in a Delphi solution, but I suspect that the answer would be pretty much the same for any language.
[ "I think you're looking for GetModuleFileName.\nhttp://www.swissdelphicenter.ch/torry/showcode.php?id=143:\n{\n If you are working on a DLL and are interested in the filename of the\n DLL rather than the filename of the application, then you can use this function:\n}\n\nfunction GetModuleName: string;\nvar\n szFileName: array[0..MAX_PATH] of Char;\nbegin\n FillChar(szFileName, SizeOf(szFileName), #0);\n GetModuleFileName(hInstance, szFileName, MAX_PATH);\n Result := szFileName;\nend;\n\nUntested though, been some time since I worked with Delphi :)\n" ]
[ 39 ]
[]
[]
[ "delphi", "dll", "winapi", "windows" ]
stackoverflow_0000002043_delphi_dll_winapi_windows.txt
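As the answer notes, the approach translates to any language that can call Win32. A C# rendering via P/Invoke, for instance, might look like this sketch (the module name parameter is an assumption — pass the DLL's own file name; managed assemblies can use Assembly.GetExecutingAssembly().Location instead):

using System;
using System.Runtime.InteropServices;
using System.Text;

static class ModulePath
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern uint GetModuleFileName(IntPtr hModule, StringBuilder lpFilename, int nSize);

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    // Pass the DLL's module name, e.g. "mylib.dll"; null would return the host EXE's path.
    public static string GetPath(string moduleName)
    {
        IntPtr hModule = GetModuleHandle(moduleName);
        StringBuilder buffer = new StringBuilder(260); // MAX_PATH
        uint length = GetModuleFileName(hModule, buffer, buffer.Capacity);
        return length > 0 ? buffer.ToString() : null;
    }
}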
Q: How to get the value of built, encoded ViewState? I need to grab the base64-encoded representation of the ViewState. Obviously, this would not be available until fairly late in the request lifecycle, which is OK. For example, if the output of the page includes: <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUJODU0Njc5MD...==" /> I need a way on the server-side to get the value "/wEPDwUJODU0Njc5MD...==" To clarify, I need this value when the page is being rendered, not on PostBack. e.g. I need to know the ViewState value that is being sent to the client, not the ViewState I'm getting back from them. A: Rex, I suspect a good place to start looking is solutions that compress the ViewState -- they're grabbing ViewState on the server before it's sent down to the client and gzipping it. That's exactly where you want to be. Scott Hanselman on ViewState Compression (2005) ViewState Compression with System.IO.Compression (2007) A: See this blog post where the author describes a method for overriding the default behavior for generating the ViewState and instead shows how to save it on the server Session object. In ASP.NET 2.0, ViewState is saved by a descendant of PageStatePersister class. This class is an abstract class for saving and loading ViewsState and there are two implemented descendants of this class in .Net Framework, named HiddenFieldPageStatePersister and SessionPageStatePersister. By default HiddenFieldPageStatePersister is used to save/load ViewState information, but we can easily get the SessionPageStatePersister to work and save ViewState in Session object. Although I did not test his code, it seems to show exactly what you want: a way to gain access to ViewState code while still on the server, before postback. A: I enabled compression following similar articles to those posted above. The key to accessing the ViewState before the application sends it was overriding this method; protected override void SavePageStateToPersistenceMedium(object viewState) You can call the base method within this override and then add whatever additional logic you require to handle the ViewState.
How to get the value of built, encoded ViewState?
I need to grab the base64-encoded representation of the ViewState. Obviously, this would not be available until fairly late in the request lifecycle, which is OK. For example, if the output of the page includes: <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUJODU0Njc5MD...==" /> I need a way on the server-side to get the value "/wEPDwUJODU0Njc5MD...==" To clarify, I need this value when the page is being rendered, not on PostBack. e.g. I need to know the ViewState value that is being sent to the client, not the ViewState I'm getting back from them.
[ "Rex, I suspect a good place to start looking is solutions that compress the ViewState -- they're grabbing ViewState on the server before it's sent down to the client and gzipping it. That's exactly where you want to be.\n\nScott Hanselman on ViewState Compression (2005)\nViewState Compression with System.IO.Compression (2007)\n\n", "See this blog post where the author describes a method for overriding the default behavior for generating the ViewState and instead shows how to save it on the server Session object.\n\nIn ASP.NET 2.0, ViewState is saved by\n a descendant of PageStatePersister\n class. This class is an abstract class\n for saving and loading ViewsState and\n there are two implemented descendants\n of this class in .Net Framework, named\n HiddenFieldPageStatePersister and\n SessionPageStatePersister. By default\n HiddenFieldPageStatePersister is used\n to save/load ViewState information,\n but we can easily get the\n SessionPageStatePersister to work and\n save ViewState in Session object.\n\nAlthough I did not test his code, it seems to show exactly what you want: a way to gain access to ViewState code while still on the server, before postback. \n", "I enabled compression following similar articles to those posted above. The key to accessing the ViewState before the application sends it was overriding this method;\nprotected override void SavePageStateToPersistenceMedium(object viewState)\n\nYou can call the base method within this override and then add whatever additional logic you require to handle the ViewState.\n" ]
[ 13, 4, 2 ]
[]
[]
[ "asp.net", "c#", "viewstate" ]
stackoverflow_0000001010_asp.net_c#_viewstate.txt
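A hedged C# sketch of the override route from the last answer: LosFormatter is the serializer ASP.NET uses for the hidden field, so running the state object through it yields the base64 string before the page is rendered (the exact on-page value can differ slightly when ViewState MAC or encryption is enabled):

protected override void SavePageStateToPersistenceMedium(object state)
{
    System.Web.UI.LosFormatter formatter = new System.Web.UI.LosFormatter();
    using (System.IO.StringWriter writer = new System.IO.StringWriter())
    {
        formatter.Serialize(writer, state);
        string encodedViewState = writer.ToString(); // the "/wEPDwUJ..." style value
        // inspect, log, or compress encodedViewState here
    }
    base.SavePageStateToPersistenceMedium(state); // emit the hidden field as usual
}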
Q: How can I change the background of a masterpage from the code behind of a content page? I specifically want to add the style of background-color to the <body> tag of a master page, from the code behind (C#) of a content page that uses that master page. I have different content pages that need to make the master page have different colors depending on which content page is loaded, so that the master page matches the content page's theme. I have a solution below: I'm looking for something more like: Master.Attributes.Add("style", "background-color: 2e6095"); Inside of the page load function of the content page. But I can't get the above line to work. I only need to change the background-color for the <body> tag of the page. A: What I would do for the particular case is: i. Define the body as a server side control <body runat="server" id="masterpageBody"> ii. In your content aspx page, register the MasterPage with the register: <% MasterPageFile="..." %> iii. In the Content Page, you can now simply use Master.FindControl("masterpageBody") and have access to the control. Now, you can change whatever properties/style that you like! A: This is what I came up with: In the page load function: HtmlGenericControl body = (HtmlGenericControl)Master.FindControl("default_body"); body.Style.Add(HtmlTextWriterStyle.BackgroundColor, "#2E6095"); Where default_body = the id of the body tag. A: I believe you are talking about a content management system. The way I have dealt with this situation in the past is to either: Allow a page/content to define an extra custom stylesheet or Allow a page/content to define inline style tags
How can I change the background of a masterpage from the code behind of a content page?
I specifically want to add the style of background-color to the <body> tag of a master page, from the code behind (C#) of a content page that uses that master page. I have different content pages that need to make the master page have different colors depending on which content page is loaded, so that the master page matches the content page's theme. I have a solution below: I'm looking for something more like: Master.Attributes.Add("style", "background-color: 2e6095"); Inside of the page load function of the content page. But I can't get the above line to work. I only need to change the background-color for the <body> tag of the page.
[ "What I would do for the particular case is:\ni. Define the body as a server side control\n<body runat=\"server\" id=\"masterpageBody\">\n\nii. In your content aspx page, register the MasterPage with the register:\n<% MasterPageFile=\"...\" %>\n\niii. In the Content Page, you can now simply use \nMaster.FindControl(\"masterpageBody\")\n\nand have access to the control. Now, you can change whatever properties/style that you like!\n", "This is what I came up with:\nIn the page load function:\nHtmlGenericControl body = (HtmlGenericControl)Master.FindControl(\"default_body\");\nbody.Style.Add(HtmlTextWriterStyle.BackgroundColor, \"#2E6095\");\n\nWhere \n\ndefault_body = the id of the body tag.\n\n", "I believe you are talking about a content management system. The way I have delt with this situation in the past is to either:\n\nAllow a page/content to define an extra custom stylesheet or\nAllow a page/content to define inline style tags\n\n" ]
[ 10, 1, 0 ]
[]
[]
[ ".net", "asp.net", "c#", "master_pages" ]
stackoverflow_0000002209_.net_asp.net_c#_master_pages.txt
Q: Add Custom Tag to Visual Studio Validation How can I add rules to Visual Studio (2005 and up) for validating property markup (HTML) for a vendor's proprietary controls? My client uses a control which requires several properties to be set as tags in the aspx file which generates something like 215 validation errors on each build. It's not preventing me from building, but real errors are getting lost in the noise. A: Right-click on the Source view of an HTML / ASP page and select "Formatting and Validation". Click "Tag Specific Options". Expand "Client HTML Tags" and select the heading. Click "New Tag...". And just fill it in! I wish that I could add custom CSS values as well.
Add Custom Tag to Visual Studio Validation
How can I add rules to Visual Studio (2005 and up) for validating property markup (HTML) for a vendor's proprietary controls? My client uses a control which requires several properties to be set as tags in the aspx file which generates something like 215 validation errors on each build. It's not preventing me from building, but real errors are getting lost in the noise.
[ "Right-click on the Source view of an HTML / ASP page and select \"Formatting and Validation\".\n\nClick \"Tag Specific Options\".\nExpand \"Client HTML Tags\" and select the heading.\nClick \"New Tag...\".\nAnd just fill it in!\n\nI wish that I could add custom CSS values as well.\n" ]
[ 5 ]
[]
[]
[ "visual_studio" ]
stackoverflow_0000002279_visual_studio.txt
Q: How do I turn on line numbers by default in TextWrangler on the Mac? I am fed up having to turn them on every time I open the application. A: Go to TextWrangler > Preferences. Choose Text Status Display in the category pane, then check the option "Show line numbers" and close the preferences. This should now be on by default when you open existing documents.
How do I turn on line numbers by default in TextWrangler on the Mac?
I am fed up having to turn them on every time I open the application.
[ "Go to TextWrangler > Preferences.\nChoose Text Status Display in the category pane, then check the option \"Show line numbers\" and close the preferences. This should now be on by default when you open existing documents.\n" ]
[ 28 ]
[]
[]
[ "macos", "textwrangler" ]
stackoverflow_0000002332_macos_textwrangler.txt
Q: How to filter and combine 2 datasets in C# I am building a web page to show a customer what software they purchased and to give them a link to download said software. Unfortunately, the data on what was purchased and the download information are in separate databases so I can't just take care of it with joins in an SQL query. The common item is SKU. I'll be pulling a list of SKUs from the customer purchases database and on the download table is a comma-delimited list of SKUs associated with that download. My intention, at the moment, is to create a single DataTable from this to populate a GridView. Any suggestions on how to do this efficiently would be appreciated. If it helps, I can pretty easily pull back the data as a DataSet or a DataReader, if either one would be better for this purpose. A: As long as the two databases are on the same physical server (assuming MSSQL) and the username/password being used in the connection string has rights to both DBs, then you should be able to perform a join across the two databases. Example: select p.Date, p.Amount, d.SoftwareName, d.DownloadLink from PurchaseDB.dbo.Purchases as p join ProductDB.dbo.Products as d on d.sku = p.sku where p.UserID = 12345 A: Why not create a Domain object based approach to this problem: public class CustomerDownloadInfo { private string sku; private readonly ICustomer customer; public CustomerDownloadInfo(ICustomer Customer){ customer = Customer; } public void AttachSku(string Sku){ sku = Sku; } public string Sku{ get { return sku; } } public string Link{ get{ // etc... etc... } } } There are a million variations on this, but once you aggregate this information, wouldn't it be easier to present? A: I am thinking off the top of my head here. If you load both as DataTables in the same DataSet and define a relation between the two over SKU, you can then run a query on the DataSet that produces the desired result.
How to filter and combine 2 datasets in C#
I am building a web page to show a customer what software they purchased and to give them a link to download said software. Unfortunately, the data on what was purchased and the download information are in separate databases so I can't just take care of it with joins in an SQL query. The common item is SKU. I'll be pulling a list of SKUs from the customer purchases database and on the download table is a comma-delimited list of SKUs associated with that download. My intention, at the moment, is to create a single DataTable from this to populate a GridView. Any suggestions on how to do this efficiently would be appreciated. If it helps, I can pretty easily pull back the data as a DataSet or a DataReader, if either one would be better for this purpose.
[ "As long as the two databases are on the same physical server (assuming MSSQL) and the username/password being used in the connection string has rights to both DBs, then you should be able to perform a join across the two databases. Example: \nselect p.Date,\n p.Amount,\n d.SoftwareName,\n d.DownloadLink\nfrom PurchaseDB.dbo.Purchases as p\njoin ProductDB.dbo.Products as d on d.sku = p.sku\nwhere p.UserID = 12345\n\n", "Why not create a Domain object based approach to this problem:\npublic class CustomerDownloadInfo\n{\n private string sku;\n private readonly ICustomer customer;\n\n public CustomerDownloadInfo(ICustomer Customer){\n customer = Customer;\n }\n\n public void AttachSku(string Sku){\n sku = Sku;\n }\n\n public string Sku{\n get { return sku; }\n }\n\n public string Link{\n get{ \n // etc... etc... \n }\n }\n}\n\nThere are a million variations on this, but once you aggregate this information, wouldn't it be easier to present?\n", "I am thinking off the top of my head here. If you load both as Data Tables in the same Data Sets, and define a relation between the two over SKU, and then run a query on the Data Set which produces the desired result.\n" ]
[ 3, 2, 0 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0000002267_.net_c#.txt
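A sketch of the DataSet-relation idea from the last answer, in C#. Note one assumption: DataRelation needs one SKU per download row, so the comma-delimited SKU column from the question would first have to be split out into individual rows; all table and column names here are placeholders.

using System.Data;

DataSet ds = new DataSet();
ds.Tables.Add(purchases);   // hypothetical DataTable with one "SKU" column per row
ds.Tables.Add(downloads);   // hypothetical DataTable, already split to one SKU per row

// false = navigation only, no foreign-key constraint enforced
ds.Relations.Add("PurchaseToDownload",
    purchases.Columns["SKU"], downloads.Columns["SKU"], false);

DataTable combined = new DataTable();
combined.Columns.Add("SKU");
combined.Columns.Add("DownloadLink");

foreach (DataRow purchase in purchases.Rows)
    foreach (DataRow download in purchase.GetChildRows("PurchaseToDownload"))
        combined.Rows.Add(purchase["SKU"], download["DownloadLink"]);

// combined is now ready to bind to the GridView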
Q: Binary file layout reference Where are some good sources of information on binary file layout structures? If I wanted to pull in a BTrieve index file, parse MP3 headers, etc. Where does one get reliable information? A: I'm not sure if there's a general information source for this kind of information. I always just search on google or wikipedia for that particular file type. The binary file layout structure information should be included. For example, the MP3 file layout structure: http://en.wikipedia.org/wiki/MP3#File_structure
Binary file layout reference
Where are some good sources of information on binary file layout structures? If I wanted to pull in a BTrieve index file, parse MP3 headers, etc. Where does one get reliable information?
[ "I'm not sure if there's a general information source for this kind of information. I always just search on google or wikipedia for that particular file type. The binary file layout structure information should be included.\nFor example, http://en.wikipedia.org/wiki/MP3#File_structure\">MP3 file layout structure\n" ]
[ 1 ]
[]
[]
[ "binary", "data_structures", "file", "language_agnostic" ]
stackoverflow_0000002405_binary_data_structures_file_language_agnostic.txt
Q: How to map a latitude/longitude to a distorted map? I have a bunch of latitude/longitude pairs that map to known x/y coordinates on a (geographically distorted) map. Then I have one more latitude/longitude pair. I want to plot it on the map as best as possible. How do I go about doing this? At first I decided to create a system of linear equations for the three nearest lat/long points and compute a transformation from these, but this doesn't work well at all. Since that's a linear system, I can't use more nearby points either. You can't assume North is up: all you have is the existing lat/long->x/y mappings. EDIT: it's not a Mercator projection, or anything like that. It's arbitrarily distorted for readability (think subway map). I want to use only the nearest 5 to 10 mappings so that distortion on other parts of the map doesn't affect the mapping I'm trying to compute. Further, the entire map is in a very small geographical area so there's no need to worry about the globe--flat-earth assumptions are good enough. A: Are there any more specific details on the kind of distortion? If, for example, your latitudes and longitudes are "distorted" onto your 2D map using a Mercator projection, the conversion math is readily available. If the map is distorted truly arbitrarily, there are lots of things you could try, but the simplest would probably be to compute a weighted average from your existing point mappings. Your weights could be the squared inverse of the x/y distance from your new point to each of your existing points. Some pseudocode: estimate-latitude-longitude (x, y) numerator-latitude := 0 numerator-longitude := 0 denominator := 0 for each point, deltaX := x - point.x deltaY := y - point.y distSq := deltaX * deltaX + deltaY * deltaY weight := 1 / distSq numerator-latitude += weight * point.latitude numerator-longitude += weight * point.longitude denominator += weight return (numerator-latitude / denominator, numerator-longitude / denominator) This code will give a relatively simple approximation. If you can be more precise about the way the projection distorts the geographical coordinates, you can probably do much better. A: Alright. From a theoretical point of view, given that the distortion is "arbitrary", and any solution requires you to model this arbitrary distortion, you obviously can't get an "answer". However, any solution is going to involve imposing (usually implicitly) some model of the distortion that may or may not reflect the reality of the situation. Since you seem to be most interested in models that presume some sort of local continuity of the distortion mapping, the most obvious choice is the one you've already tried: linear interpolation between the nearest points. Going beyond that is going to require more sophisticated mathematical and numerical analysis knowledge. You are incorrect, however, in presuming you cannot expand this to more points. You can by using a least-squared error approach. Find the linear answer that minimizes the error of the other points. This is probably the most straight-forward extension. In other words, take the 5 nearest points and try to come up with a linear approximation that minimizes the error of those points. And use that. I would try this next. If that doesn't work, then the assumption of linearity over the area of N points is broken. At that point you'll need to upgrade to either a quadratic or cubic model. The math is going to get hectic at that point.
A: The problem is that the sphere can be distorted a number of ways, and having all those points known on the equator, let's say, won't help you map points further away. You need better 'close' points, then you can assume these three points are on a plane with the fourth and do the interpolation --knowing that the distance of longitudes is a function, not a constant. A: Ummm. Maybe I am missing something about the question here, but if you have long/lat info, you also have the direction of north? It seems you need to map geodesic coordinates to a projected coordinates system. For example osgb to wgs84. The maths involved is non-trivial, but the code comes out at only a few lines. If I had more time I'd post more but I need a shower so I will be boring and link to the wikipedia entry which is pretty good. Note: Post shower edited.
How to map a latitude/longitude to a distorted map?
I have a bunch of latitude/longitude pairs that map to known x/y coordinates on a (geographically distorted) map. Then I have one more latitude/longitude pair. I want to plot it on the map as best as possible. How do I go about doing this? At first I decided to create a system of linear equations for the three nearest lat/long points and compute a transformation from these, but this doesn't work well at all. Since that's a linear system, I can't use more nearby points either. You can't assume North is up: all you have is the existing lat/long->x/y mappings. EDIT: it's not a Mercator projection, or anything like that. It's arbitrarily distorted for readability (think subway map). I want to use only the nearest 5 to 10 mappings so that distortion on other parts of the map doesn't affect the mapping I'm trying to compute. Further, the entire map is in a very small geographical area so there's no need to worry about the globe--flat-earth assumptions are good enough.
[ "Are there any more specific details on the kind of distortion? If, for example, your latitudes and longitudes are \"distorted\" onto your 2D map using a Mercator projection, the conversion math is readily available.\nIf the map is distorted truly arbitrarily, there are lots of things you could try, but the simplest would probably be to compute a weighted average from your existing point mappings. Your weights could be the squared inverse of the x/y distance from your new point to each of your existing points.\nSome pseudocode:\nestimate-latitude-longitude (x, y)\n\n numerator-latitude := 0\n numerator-longitude := 0\n denominator := 0\n\n for each point,\n deltaX := x - point.x\n deltaY := y - point.y\n distSq := deltaX * deltaX + deltaY * deltaY\n weight := 1 / distSq\n\n numerator-latitude += weight * point.latitude\n numerator-longitude += weight * point.longitude\n denominator += weight\n\n return (numerator-latitude / denominator, numerator-longitude / denominator)\n\nThis code will give a relatively simple approximation. If you can be more precise about the way the projection distorts the geographical coordinates, you can probably do much better.\n", "Alright. From a theoretical point of view, given that the distortion is \"arbitrary\", and any solution requires you to model this arbitrary distortion, you obviously can't get an \"answer\". However, any solution is going to involve imposing (usually implicitly) some model of the distortion that may or may not reflect the reality of the situation.\nSince you seem to be most interested in models that presume some sort of local continuity of the distortion mapping, the most obvious choice is the one you've already tried: linear interpolaton between the nearest points. Going beyond that is going to require more sophisticated mathematical and numerical analysis knowledge.\nYou are incorrect, however, in presuming you cannot expand this to more points. You can by using a least-squared error approach. Find the linear answer that minimizes the error of the other points. This is probably the most straight-forward extension. In other words, take the 5 nearest points and try to come up with a linear approximation that minimizes the error of those points. And use that. I would try this next.\nIf that doesn't work, then the assumption of linearity over the area of N points is broken. At that point you'll need to upgrade to either a quadratic or cubic model. The math is going to get hectic at that point.\n", "the problem is that the sphere can be distorted a number of ways, and having all those points known on the equator, lets say, wont help you map points further away.\nYou need better 'close' points, then you can assume these three points are on a plane with the fourth and do the interpolation --knowing that the distance of longitudes is a function, not a constant.\n", "Ummm. Maybe I am missing something about the question here, but if you have long/lat info, you also have the direction of north?\nIt seems you need to map geodesic coordinates to a projected coordinates system. For example osgb to wgs84.\nThe maths involved is non-trivial, but the code comes out a only a few lines. If I had more time I'd post more but I need a shower so I will be boring and link to the wikipedia entry which is pretty good.\nNote: Post shower edited.\n" ]
[ 8, 2, 0, 0 ]
[]
[]
[ "latitude_longitude", "mapping", "maps", "math" ]
stackoverflow_0000001908_latitude_longitude_mapping_maps_math.txt
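For concreteness, the first answer's pseudocode rendered as C# (the MapPoint type and its fields are assumptions, not part of the original post):

struct MapPoint { public double X, Y, Latitude, Longitude; }

static void EstimateLatLon(double x, double y, MapPoint[] known,
                           out double lat, out double lon)
{
    double numLat = 0, numLon = 0, denom = 0;
    foreach (MapPoint p in known)
    {
        double dx = x - p.X, dy = y - p.Y;
        double distSq = dx * dx + dy * dy;
        if (distSq == 0) { lat = p.Latitude; lon = p.Longitude; return; } // exact hit
        double w = 1.0 / distSq; // inverse-square-distance weight
        numLat += w * p.Latitude;
        numLon += w * p.Longitude;
        denom += w;
    }
    lat = numLat / denom;
    lon = numLon / denom;
}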
Q: File size differences after copying a file to a server via FTP I have created a PHP script to update a web server that is live inside a local directory. I'm migrating the script into Python. It works fine for the most part, but after a PUT command, the size of the file appears to change. Thus, the size of the file is different from that of the file on the server. Once I download the file again from the FTP server, the only difference is the CR/LF mark. This annoys me because the same script is comparing the size of the files to update. Also, in case it means anything, the script works perfectly in PHP via ftp_put. from ftplib import FTP ftpserver = "myserver" ftpuser = "myuser" ftppass = "mypwd" locfile = "g:/test/style.css" ftpfile = "/temp/style.css" try: ftp = FTP(ftpserver, ftpuser, ftppass) except: exit ("Cannot connect") f = open (locfile, "r") try: ftp.delete (ftpfile) except: pass # ftp.sendcmd ("TYPE I") # ftp.storlines("STOR %s" % ftpfile, f) ftp.storbinary("STOR %s" % ftpfile, f) f.close() ftp.dir (ftpfile) ftp.quit() Any suggestions? A: Do you need to open the locfile in binary using rb? f = open (locfile, "rb") A: Well if you go under the properties of your file in Windows or a *nix environment, you will notice two sizes. One is the sector size, and one is the actual size. The sector size is the number of sectors in bytes that are used up on your hard disk. That is because two files cannot be in the same sector with most modern file systems, so if your file fills up half of the sector the whole sector is marked as filled. So you might be comparing the sector file size to the actual file size on the FTP server or vice versa. A: Small files take up a whole node on the file system whatever the size is. My host tends to report all small files as 4KB in ftp but gives an accurate size in a shell so it might be a 'feature' common to ftp clients.
File size differences after copying a file to a server via FTP
I have created a PHP script to update a web server that is live inside a local directory. I'm migrating the script into Python. It works fine for the most part, but after a PUT command, the size of the file appears to change. Thus, the size of the file is different from that of the file on the server. Once I download the file again from the FTP server, the only difference is the CR/LF mark. This annoys me because the same script is comparing the size of the files to update. Also, in case it means anything, the script works perfectly in PHP via ftp_put. from ftplib import FTP ftpserver = "myserver" ftpuser = "myuser" ftppass = "mypwd" locfile = "g:/test/style.css" ftpfile = "/temp/style.css" try: ftp = FTP(ftpserver, ftpuser, ftppass) except: exit ("Cannot connect") f = open (locfile, "r") try: ftp.delete (ftpfile) except: pass # ftp.sendcmd ("TYPE I") # ftp.storlines("STOR %s" % ftpfile, f) ftp.storbinary("STOR %s" % ftpfile, f) f.close() ftp.dir (ftpfile) ftp.quit() Any suggestions?
[ "Do you need to open the locfile in binary using rb?\nf = open (locfile, \"rb\")\n\n", "Well if you go under the properties of your file in Windows or a *nix environment, you will notice two sizes. One is the sector size, and one is the actual size. The sector size is the number of sectors in bytes that are used up on your hard disk. That is because two files cannot be in the same sector with most modern file systems, so if your file fills up half of the sector the whole sector is marked as filled.\nSo you might be comparing the sector file size to the actual file size on the FTP server or vice versa.\n", "Small files take up a whole node on the file system whatever the size is.\nMy host tends to report all small files as 4KB in ftp but gives an accurate size in a shell so it might be a 'feature' common to ftp clients.\n" ]
[ 17, 3, 0 ]
[]
[]
[ "ftp", "ftplib", "php", "python", "webserver" ]
stackoverflow_0000002311_ftp_ftplib_php_python_webserver.txt
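The accepted fix is the "rb" open mode above; the same text-versus-binary distinction exists in other FTP clients too. As a point of comparison, a hedged C# sketch with FtpWebRequest, reusing the question's placeholder paths and credentials, where UseBinary plays the role of "rb":

using System.IO;
using System.Net;

FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://myserver/temp/style.css");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("myuser", "mypwd");
request.UseBinary = true; // no CR/LF translation, so sizes stay comparable

byte[] contents = File.ReadAllBytes(@"g:\test\style.css"); // raw bytes, untranslated
using (Stream stream = request.GetRequestStream())
{
    stream.Write(contents, 0, contents.Length);
}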
Q: Decoding T-SQL CAST in C#/VB.NET Recently our site has been deluged with the resurgence of the Asprox botnet SQL injection attack. Without going into details, the attack attempts to execute SQL code by encoding the T-SQL commands in an ASCII encoded BINARY string. It looks something like this: DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x44004500...06F007200%20AS%20NVARCHAR(4000));EXEC(@S);-- I was able to decode this in SQL, but I was a little wary of doing this since I didn't know exactly what was happening at the time. I tried to write a simple decode tool, so I could decode this type of text without even touching SQL Server. The main part I need to be decoded is: CAST(0x44004500...06F007200 AS NVARCHAR(4000)) I've tried all of the following commands with no luck: txtDecodedText.Text = System.Web.HttpUtility.UrlDecode(txtURLText.Text); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Convert.FromBase64String(txtURLText.Text)); What is the proper way to translate this encoding without using SQL Server? Is it possible? I'll take VB.NET code since I'm familiar with that too. Okay, I'm sure I'm missing something here, so here's where I'm at. Since my input is a basic string, I started with just a snippet of the encoded portion - 4445434C41 (which translates to DECLA) - and the first attempt was to do this... txtDecodedText.Text = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(txtURL.Text)); ...and all it did was return the exact same thing that I put in since it converted each character into a byte. I realized that I need to parse every two characters into a byte manually since I don't know of any methods yet that will do that, so now my little decoder looks something like this: while (!boolIsDone) { bytURLChar = byte.Parse(txtURLText.Text.Substring(intParseIndex, 2)); bytURL[intURLIndex] = bytURLChar; intParseIndex += 2; intURLIndex++; if (txtURLText.Text.Length - intParseIndex < 2) { boolIsDone = true; } } txtDecodedText.Text = Encoding.UTF8.GetString(bytURL); Things look good for the first couple of pairs, but then the loop balks when it gets to the "4C" pair and says that the string is in the incorrect format. Interestingly enough, when I step through the debugger and to the GetString method on the byte array that I was able to parse up to that point, I get ",-+" as the result. How do I figure out what I'm missing - do I need to do a "direct cast" for each byte instead of attempting to parse it? A: I went back to Michael's post, did some more poking and realized that I did need to do a double conversion, and eventually worked out this little nugget: Convert.ToString(Convert.ToChar(Int32.Parse(EncodedString.Substring(intParseIndex, 2), System.Globalization.NumberStyles.HexNumber))); From there I simply made a loop to go through all the characters 2 by 2 and get them "hexified" and then translated to a string. To Nick, and anybody else interested, I went ahead and posted my little application over in CodePlex. Feel free to use/modify as you need. A: Try removing the 0x first and then call Encoding.Unicode.GetString. I think that may work.
Essentially: 0x44004500 Remove the 0x, and then always two bytes are one character: 44 00 = D 45 00 = E 6F 00 = o 72 00 = r So it's definitely a Unicode/UTF format with two bytes/character.
Decoding T-SQL CAST in C#/VB.NET
Recently our site has been deluged with the resurgence of the Asprox botnet SQL injection attack. Without going into details, the attack attempts to execute SQL code by encoding the T-SQL commands in an ASCII encoded BINARY string. It looks something like this: DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x44004500...06F007200%20AS%20NVARCHAR(4000));EXEC(@S);-- I was able to decode this in SQL, but I was a little wary of doing this since I didn't know exactly what was happening at the time. I tried to write a simple decode tool, so I could decode this type of text without even touching SQL Server. The main part I need to be decoded is: CAST(0x44004500...06F007200 AS NVARCHAR(4000)) I've tried all of the following commands with no luck: txtDecodedText.Text = System.Web.HttpUtility.UrlDecode(txtURLText.Text); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Convert.FromBase64String(txtURLText.Text)); What is the proper way to translate this encoding without using SQL Server? Is it possible? I'll take VB.NET code since I'm familiar with that too. Okay, I'm sure I'm missing something here, so here's where I'm at. Since my input is a basic string, I started with just a snippet of the encoded portion - 4445434C41 (which translates to DECLA) - and the first attempt was to do this... txtDecodedText.Text = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(txtURL.Text)); ...and all it did was return the exact same thing that I put in since it converted each character into a byte. I realized that I need to parse every two characters into a byte manually since I don't know of any methods yet that will do that, so now my little decoder looks something like this: while (!boolIsDone) { bytURLChar = byte.Parse(txtURLText.Text.Substring(intParseIndex, 2)); bytURL[intURLIndex] = bytURLChar; intParseIndex += 2; intURLIndex++; if (txtURLText.Text.Length - intParseIndex < 2) { boolIsDone = true; } } txtDecodedText.Text = Encoding.UTF8.GetString(bytURL); Things look good for the first couple of pairs, but then the loop balks when it gets to the "4C" pair and says that the string is in the incorrect format. Interestingly enough, when I step through the debugger and to the GetString method on the byte array that I was able to parse up to that point, I get ",-+" as the result. How do I figure out what I'm missing - do I need to do a "direct cast" for each byte instead of attempting to parse it?
[ "I went back to Michael's post, did some more poking and realized that I did need to do a double conversion, and eventually worked out this little nugget:\nConvert.ToString(Convert.ToChar(Int32.Parse(EncodedString.Substring(intParseIndex, 2), System.Globalization.NumberStyles.HexNumber)));\n\nFrom there I simply made a loop to go through all the characters 2 by 2 and get them \"hexified\" and then translated to a string.\nTo Nick, and anybody else interested, I went ahead and posted my little application over in CodePlex. Feel free to use/modify as you need.\n", "Try removing the 0x first and then call Encoding.UTF8.GetString. I think that may work.\nEssentially: 0x44004500\nRemove the 0x, and then always two bytes are one character:\n44 00 = D\n\n45 00 = E\n\n6F 00 = o\n\n72 00 = r\n\nSo it's definitely a Unicode/UTF format with two bytes/character.\n" ]
[ 24, 8 ]
[]
[]
[ "ascii", "c#", "hex", "sql", "vb.net" ]
stackoverflow_0000000109_ascii_c#_hex_sql_vb.net.txt
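Putting the two answers together, a compact C# helper; it assumes its input is the raw hex from the CAST, and relies on the byte pairs above being little-endian UTF-16, which Encoding.Unicode decodes in one call:

using System;
using System.Text;

static string DecodeCastHex(string hex)
{
    if (hex.StartsWith("0x", StringComparison.OrdinalIgnoreCase))
        hex = hex.Substring(2);                // strip the 0x prefix if present

    byte[] bytes = new byte[hex.Length / 2];
    for (int i = 0; i < bytes.Length; i++)     // two hex digits per byte
        bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);

    return Encoding.Unicode.GetString(bytes);  // UTF-16LE, per the 44 00 = D pairs
}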
Q: What sites offer free, quality web site design templates? Let's aggregate a list of free quality web site design templates. There are a million of these sites out there, but most are repetitive and boring. I'll start with freeCSStemplates.org I also think other sites should follow some sort of standards, for example here are freeCSStemplates standards Released for FREE under the Creative Commons Attribution 2.5 license Very lightweight in terms of images Tables-free (ie. they use no tables for layout purposes) W3C standards compliant and valid (XHTML Strict) Provided with public domain photos, generously provided by PDPhoto.org and Wikimedia Commons A: Check out: Open Source Web Designs CSS Remix Best Web Gallery CSS Based CSS Beauty CSS Genius A: The Open Design Community is a great resource. A: http://www.csszengarden.com/ The images are not Creative Commons, but the CSS is. A: +1 for Zen garden. I like the resources at inobscuro.com A: http://www.opensourcetemplates.org/ has nice designs, just not enough selection.
What sites offer free, quality web site design templates?
Let's aggregate a list of free quality web site design templates. There are a million of these sites out there, but most are repetitive and boring. I'll start with freeCSStemplates.org I also think other sites should follow some sort of standards, for example here are freeCSStemplates standards Released for FREE under the Creative Commons Attribution 2.5 license Very lightweight in terms of images Tables-free (ie. they use no tables for layout purposes) W3C standards compliant and valid (XHTML Strict) Provided with public domain photos, generously provided by PDPhoto.org and Wikimedia Commons
[ "Check out:\n\nOpen Source Web Designs\nCSS Remix\nBest Web Gallery\nCSS Based\nCSS Beauty\nCSS Genius\n\n", "The Open Design Community is a great resource.\n", "http://www.csszengarden.com/\nThe images are not Creative Commons, but the CSS is.\n", "+1 for Zen garden.\nI like the resources at inobscuro.com\n", "http://www.opensourcetemplates.org/ has nice designs, just not enough selection.\n" ]
[ 12, 3, 2, 0, 0 ]
[]
[]
[ "css", "templates" ]
stackoverflow_0000002711_css_templates.txt
Q: Is there a keyboard shortcut to view all open documents in Visual Studio 2008 I am trying to learn the keyboard shortcuts in Visual Studio in order to be more productive. So I downloaded a document showing many of the default keybindings in Visual Basic when using the VS 2008 IDE from Microsoft. When I tried what they say is the keyboard shortcut to view all open documents (CTRL + ALT + DOWN ARROW), I got a completely unexpected result on my XP machine; my entire screen display was flipped upside down! Was this a prank by someone at Microsoft? I can't imagine what practical value this flipping of the screen would have. Does anyone know what the correct keyboard shortcut is to view all open documents in VS 2008? Oh and if you try the above shortcut and it flips your display the way it did mine, do a CTRL + ALT + UP ARROW to switch it back. A: This is a conflict between your graphics driver and Visual Studio. Go to your driver settings page (Control panel) and disable the display rotation shortcuts. With this conflict removed, the shortcut will work in Visual Studio.
Is there a keyboard shortcut to view all open documents in Visual Studio 2008
I am trying to learn the keyboard shortcuts in Visual Studio in order to be more productive. So I downloaded a document showing many of the default keybindings in Visual Basic when using the VS 2008 IDE from Microsoft. When I tried what they say is the keyboard shortcut to view all open documents (CTRL + ALT + DOWN ARROW), I got a completely unexpected result on my XP machine; my entire screen display was flipped upside down! Was this a prank by someone at Microsoft? I can't imagine what practical value this flipping of the screen would have. Does anyone know what the correct keyboard shortcut is to view all open documents in VS 2008? Oh and if you try the above shortcut and it flips your display the way it did mine, do a CTRL + ALT + UP ARROW to switch it back.
[ "This is a conflict between your graphics driver and Visual Studio. Go to your driver settings page (Control panel) and disable the display rotation shortcuts. With this conflict removed, the shortcut will work in Visual Studio.\n" ]
[ 15 ]
[]
[]
[ "keyboard", "shortcut", "visual_studio" ]
stackoverflow_0000002765_keyboard_shortcut_visual_studio.txt
Q: Can't get a Console to VMs I've followed this otherwise excellent tutorial on getting Xen working with Ubuntu but am not able to get a console into my virtual machine (domU). I've got the extra = '2 console=xvc0' line in my /etc/xen/hostname_here.cfg file like they say, but am not able to get a console on it. If I statically assign an IP to the VM I can SSH to it, but for now I need to be able to use DHCP to give it an address (and since that's what I'm trying to debug, there's the problem). I know I've got a free DHCP address (although I'm getting more at the moment), so I don't think that's the problem. I've looked on Google and the Xen forums to no avail as well. Any ideas? A: I had followed a different tutorial on setting up my xen on ubuntu before 8.04 but now upgraded to 8.04. I used the extra line in my cfg as follows: extra = ' TERM=xterm xencons=tty console=tty1' It allows me to "xm console hostname" from dom0. I think this was from a problem with the xen setup in the version prior to 8.04 (I'm not sure which version that was). I'm not sure if the same change is necessary in 8.04 as I'm an upgrade and didn't change any of my domU configs after the upgrade.
Can't get a Console to VMs
I've followed this otherwise excellent tutorial on getting Xen working with Ubuntu but am not able to get a console into my virtual machine (domU). I've got the extra = '2 console=xvc0' line in my /etc/xen/hostname_here.cfg file like they say, but am not able to get a console on it. If I statically assign an IP to the VM I can SSH to it, but for now I need to be able to use DHCP to give it an address (and since that's what I'm trying to debug, there's the problem). I know I've got a free DHCP address (although I'm getting more at the moment), so I don't think that's the problem. I've looked on Google and the Xen forums to no avail as well. Any ideas?
[ "I had followed a different tutorial on setting up my xen on ubuntu before 8.04 but now upgraded to 8.04. I used the extra line in my cfg as folows:\nextra = ' TERM=xterm xencons=tty console=tty1'\n\nIt allows me to \"xm console hostname\" from dom0. I think this was from a problem with the xen setup in the version prior to 8.04 (I'm not sure which version that was). I'm not sure if the same change is necessary in 8.04 as I'm an upgrade and didn't change any of my domU configs after the upgrade.\n" ]
[ 5 ]
[]
[]
[ "ubuntu", "virtualization", "xen" ]
stackoverflow_0000002773_ubuntu_virtualization_xen.txt
Q: How to curl or wget a web page? I would like to make a nightly cron job that fetches my stackoverflow page and diffs it from the previous day's page, so I can see a change summary of my questions, answers, ranking, etc. Unfortunately, I couldn't get the right set of cookies, etc, to make this work. Any ideas? Also, when the beta is finished, will my status page be accessible without logging in? A: Your status page is available now without logging in (click logout and try it). When the beta-cookie is disabled, there will be nothing between you and your status page. For wget: wget --no-cookies --header "Cookie: soba=(LookItUpYourself)" https://stackoverflow.com/users/30/myProfile.html A: From Mark Harrison And here's what works... curl -s --cookie soba=. https://stackoverflow.com/users And for wget: wget --no-cookies --header "Cookie: soba=(LookItUpYourself)" https://stackoverflow.com/users/30/myProfile.html A: Nice idea :) I presume you've used wget's --load-cookies (filename) might help a little but it might be easier to use something like Mechanize (in Perl or python) to mimic a browser more fully to get a good spider. A: I couldn't figure out how to get the cookies to work either, but I was able to get to my status page in my browser while I was logged out, so I assume this will work once stackoverflow goes public. This is an interesting idea, but won't you also pick up diffs of the underlying html code? Do you have a strategy to avoid ending up with a diff of the html and not the actual content? A: And here's what works... curl -s --cookie soba=. http://stackoverflow.com/users
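A minimal C# sketch of the nightly fetch-and-compare idea, in case a scheduled .NET console app is more convenient than cron; the profile URL and the soba cookie value are placeholders taken from the answers, and the snapshot file name is invented for illustration:

using System;
using System.IO;
using System.Net;

class ProfileDiffJob
{
    static void Main()
    {
        // Placeholder URL and cookie value; substitute your own profile
        // page and the session cookie copied out of your browser.
        string url = "https://stackoverflow.com/users/30/myProfile.html";
        string snapshot = "profile-yesterday.html";

        using (WebClient client = new WebClient())
        {
            client.Headers.Add("Cookie", "soba=(LookItUpYourself)");
            string today = client.DownloadString(url);

            // Compare against yesterday's snapshot, if one exists.
            if (File.Exists(snapshot) && File.ReadAllText(snapshot) != today)
                Console.WriteLine("Profile changed since the last run.");

            File.WriteAllText(snapshot, today);
        }
    }
}

As one of the answers notes, a raw comparison will also flag markup changes, so a real version would want to strip or parse the HTML before diffing.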
How to curl or wget a web page?
I would like to make a nightly cron job that fetches my stackoverflow page and diffs it from the previous day's page, so I can see a change summary of my questions, answers, ranking, etc. Unfortunately, I couldn't get the right set of cookies, etc, to make this work. Any ideas? Also, when the beta is finished, will my status page be accessible without logging in?
[ "Your status page is available now without logging in (click logout and try it). When the beta-cookie is disabled, there will be nothing between you and your status page.\nFor wget:\nwget --no-cookies --header \"Cookie: soba=(LookItUpYourself)\" https://stackoverflow.com/users/30/myProfile.html\n\n", "From Mark Harrison\n\nAnd here's what works...\ncurl -s --cookie soba=. https://stackoverflow.com/users\n\nAnd for wget:\nwget --no-cookies --header \"Cookie: soba=(LookItUpYourself)\" https://stackoverflow.com/users/30/myProfile.html\n\n", "Nice idea :)\nI presume you've used wget's\n--load-cookies (filename)\n\nmight help a little but it might be easier to use something like Mechanize (in Perl or python) to mimic a browser more fully to get a good spider.\n", "I couldn't figure out how to get the cookies to work either, but I was able to get to my status page in my browser while I was logged out, so I assume this will work once stackoverflow goes public.\nThis is an interesting idea, but won't you also pick up diffs of the underlying html code? Do you have a strategy to avoid ending up with a diff of the html and not the actual content?\n", "And here's what works...\ncurl -s --cookie soba=. http://stackoverflow.com/users\n\n" ]
[ 9, 6, 3, 2, 2 ]
[]
[]
[ "curl", "http" ]
stackoverflow_0000002815_curl_http.txt
Q: Using ASP.NET Dynamic Data / LINQ to SQL, how do you have two table fields have a relationship to the same foreign key? I am using ASP.NET Dynamic Data for a project and I have a table that has two separate fields that link to the same foreign key in a different table. This relationship works fine in SQL Server. However, in the LINQ to SQL model in the ASP.NET Dynamic Data model, only the first field's relationship is reflected. If I attempt to add the second relationship manually, it complains that it "Cannot create an association "ForeignTable_BaseTable". The same property is listed more than once: "Id"." This MSDN article gives such helpful advice as: Examine the message and note the property specified in the message. Click OK to dismiss the message box. Inspect the Association Properties and remove the duplicate entries. Click OK. A: The solution is to delete and re-add BOTH tables to the LINQ to SQL diagram, not just the one you have added the second field and keys to. Alternatively, it appears you can make two associations using the LINQ to SQL interface - just don't try and bundle them into a single association.
Using ASP.NET Dynamic Data / LINQ to SQL, how do you have two table fields have a relationship to the same foreign key?
I am using ASP.NET Dynamic Data for a project and I have a table that has two separate fields that link to the same foreign key in a different table. This relationship works fine in SQL Server. However, in the LINQ to SQL model in the ASP.NET Dynamic Data model, only the first field's relationship is reflected. If I attempt to add the second relationship manually, it complains that it "Cannot create an association "ForeignTable_BaseTable". The same property is listed more than once: "Id"." This MSDN article gives such helpful advice as: Examine the message and note the property specified in the message. Click OK to dismiss the message box. Inspect the Association Properties and remove the duplicate entries. Click OK.
[ "The solution is to delete and re-add BOTH tables to the LINQ to SQL diagram, not just the one you have added the second field and keys to.\nAlternatively, it appears you can make two associations using the LINQ to SQL interface - just don't try and bundle them into a single association.\n" ]
[ 5 ]
[]
[]
[ "asp.net", "dynamic_data" ]
stackoverflow_0000003004_asp.net_dynamic_data.txt
Q: How can you tell when a user last pressed a key (or moved the mouse)? In a Win32 environment, you can use the GetLastInputInfo API call in Microsoft documentation. Basically, this method returns the last tick that corresponds with when the user last provided input, and you have to compare that to the current tick to determine how long ago that was. Xavi23cr has a good example for C# at codeproject. Any suggestions for other environments? A: As for Linux, I know that Pidgin has to determine idle time to change your status to away after a certain amount of time. You might open the source and see if you can find the code that does what you need it to do. A: You seem to have answered your own question there Nathan ;-) "GetLastInputInfo" is the way to go. One trick is that if your application is running on the desktop, and the user connects to a virtual machine, then GetLastInputInfo will report no activity (since there is no activity on the host machine). This can be different to the behaviour you want, depending on how you wish to apply the user input.
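To make the Win32 approach concrete, here is a minimal C# P/Invoke sketch of the GetLastInputInfo call and the tick comparison described above (error handling omitted for brevity):

using System;
using System.Runtime.InteropServices;

static class IdleTimer
{
    [StructLayout(LayoutKind.Sequential)]
    struct LASTINPUTINFO
    {
        public uint cbSize;
        public uint dwTime; // tick count of the last input event
    }

    [DllImport("user32.dll")]
    static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

    // Milliseconds since the user last pressed a key or moved the mouse.
    public static uint GetIdleMilliseconds()
    {
        LASTINPUTINFO info = new LASTINPUTINFO();
        info.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
        GetLastInputInfo(ref info);
        return (uint)Environment.TickCount - info.dwTime;
    }
}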
How can you tell when a user last pressed a key (or moved the mouse)?
In a Win32 environment, you can use the GetLastInputInfo API call in Microsoft documentation. Basically, this method returns the last tick that corresponds with when the user last provided input, and you have to compare that to the current tick to determine how long ago that was. Xavi23cr has a good example for C# at codeproject. Any suggestions for other environments?
[ "As for Linux, I know that Pidgin has to determine idle time to change your status to away after a certain amount of time. You might open the source and see if you can find the code that does what you need it to do.\n", "You seem to have answered your own question there Nathan ;-)\n\"GetLastInputInfo\" is the way to go.\nOne trick is that if your application is running on the desktop, and the user connects to a virtual machine, then GetLastInputInfo will report no activity (since there is no activity on the host machine).\nThis can be different to the behaviour you want, depending on how you wish to apply the user input.\n" ]
[ 3, 1 ]
[]
[]
[ "language_agnostic" ]
stackoverflow_0000002709_language_agnostic.txt
Q: How should I translate from screen space coordinates to image space coordinates in a WinForms PictureBox? I have an application that displays an image inside of a Windows Forms PictureBox control. The SizeMode of the control is set to Zoom so that the image contained in the PictureBox will be displayed in an aspect-correct way regardless of the dimensions of the PictureBox. This is great for the visual appearance of the application because you can size the window however you want and the image will always be displayed using its best fit. Unfortunately, I also need to handle mouse click events on the picture box and need to be able to translate from screen-space coordinates to image-space coordinates. It looks like it's easy to translate from screen space to control space, but I don't see any obvious way to translate from control space to image space (i.e. the pixel coordinate in the source image that has been scaled in the picture box). Is there an easy way to do this, or should I just duplicate the scaling math that they're using internally to position the image and do the translation myself? A: I wound up just implementing the translation manually. The code's not too bad, but it did leave me wishing that they provided support for it directly. I could see such a method being useful in a lot of different circumstances. I guess that's why they added extension methods :) In pseudocode: // Recompute the image scaling the zoom mode uses to fit the image on screen imageScale ::= min(pictureBox.width / image.width, pictureBox.height / image.height) scaledWidth ::= image.width * imageScale scaledHeight ::= image.height * imageScale // Compute the offset of the image to center it in the picture box imageX ::= (pictureBox.width - scaledWidth) / 2 imageY ::= (pictureBox.height - scaledHeight) / 2 // Test the coordinate in the picture box against the image bounds if pos.x < imageX or imageX + scaledWidth < pos.x then return null if pos.y < imageY or imageY + scaledHeight < pos.y then return null // Compute the normalized (0..1) coordinates in image space u ::= (pos.x - imageX) / scaledWidth v ::= (pos.y - imageY) / scaledHeight return (u, v) To get the pixel position in the image, you'd just multiply by the actual image pixel dimensions, but the normalized coordinates allow you to address the original responder's point about resolving ambiguity on a case-by-case basis. A: Depending on the scaling, the relative image pixel could be anywhere in a number of pixels. For example, if the image is scaled down significantly, pixel (2, 10) could represent anything from (2, 10) all the way up to (20, 100), so you'll have to do the math yourself and take full responsibility for any inaccuracies! :-)
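For completeness, here is one possible C# rendering of the pseudocode in the first answer; it assumes SizeMode is Zoom and returns normalized 0..1 coordinates (multiply by the image's pixel dimensions to get a pixel position), returning null for clicks on the letterboxed border:

using System;
using System.Drawing;
using System.Windows.Forms;

static class ZoomMath
{
    public static PointF? ControlToImage(PictureBox box, Point pos)
    {
        Image image = box.Image;
        float scale = Math.Min((float)box.Width / image.Width,
                               (float)box.Height / image.Height);

        float scaledWidth = image.Width * scale;
        float scaledHeight = image.Height * scale;

        // Offset that centers the scaled image inside the control.
        float imageX = (box.Width - scaledWidth) / 2f;
        float imageY = (box.Height - scaledHeight) / 2f;

        if (pos.X < imageX || pos.X > imageX + scaledWidth) return null;
        if (pos.Y < imageY || pos.Y > imageY + scaledHeight) return null;

        return new PointF((pos.X - imageX) / scaledWidth,
                          (pos.Y - imageY) / scaledHeight);
    }
}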
How should I translate from screen space coordinates to image space coordinates in a WinForms PictureBox?
I have an application that displays an image inside of a Windows Forms PictureBox control. The SizeMode of the control is set to Zoom so that the image contained in the PictureBox will be displayed in an aspect-correct way regardless of the dimensions of the PictureBox. This is great for the visual appearance of the application because you can size the window however you want and the image will always be displayed using its best fit. Unfortunately, I also need to handle mouse click events on the picture box and need to be able to translate from screen-space coordinates to image-space coordinates. It looks like it's easy to translate from screen space to control space, but I don't see any obvious way to translate from control space to image space (i.e. the pixel coordinate in the source image that has been scaled in the picture box). Is there an easy way to do this, or should I just duplicate the scaling math that they're using internally to position the image and do the translation myself?
[ "I wound up just implementing the translation manually. The code's not too bad, but it did leave me wishing that they provided support for it directly. I could see such a method being useful in a lot of different circumstances.\nI guess that's why they added extension methods :)\nIn pseudocode:\n// Recompute the image scaling the zoom mode uses to fit the image on screen\nimageScale ::= min(pictureBox.width / image.width, pictureBox.height / image.height)\n\nscaledWidth ::= image.width * imageScale\nscaledHeight ::= image.height * imageScale\n\n// Compute the offset of the image to center it in the picture box\nimageX ::= (pictureBox.width - scaledWidth) / 2\nimageY ::= (pictureBox.height - scaledHeight) / 2\n\n// Test the coordinate in the picture box against the image bounds\nif pos.x < imageX or imageX + scaledWidth < pos.x then return null\nif pos.y < imageY or imageY + scaledHeight < pos.y then return null\n\n// Compute the normalized (0..1) coordinates in image space\nu ::= (pos.x - imageX) / imageScale\nv ::= (pos.y - imageY) / imageScale\nreturn (u, v)\n\nTo get the pixel position in the image, you'd just multiply by the actual image pixel dimensions, but the normalized coordinates allow you to address the original responder's point about resolving ambiguity on a case-by-case basis.\n", "Depending on the scaling, the relative image pixel could be anywhere in a number of pixels. For example, if the image is scaled down significantly, pixel 2, 10 could represent 2, 10 all the way up to 20, 100), so you'll have to do the math yourself and take full responsibility for any inaccuracies! :-)\n" ]
[ 6, 2 ]
[]
[]
[ "c#", "picturebox", "winforms" ]
stackoverflow_0000002804_c#_picturebox_winforms.txt
Q: 'Best' Diff Algorithm I need to implement a Diff algorithm in VB.NET to find the changes between two different versions of a piece of text. I've had a scout around the web and have found a couple of different algorithms. Does anybody here know of a 'best' algorithm that I could implement? A: Well I've used the C# version on CodeProject and it's really good for what I wanted... http://www.codeproject.com/KB/recipes/diffengine.aspx You can probably get this translated into VB.net via an online converter if you can't do it yourself... A: I like An O(ND) Difference Algorithm and Its Variations by Eugene Myers. I believe it's the algorithm that was used in GNU diff. For a good background see Wikipedia. This is quite theoretical and you might wish to find source code, but I'm not aware of any in VB. A: I don't know for sure if it's the best diff algorithm but you might want to check out those links that talk about SOCT4 and SOCT6 http://dev.libresource.org/home/doc/so6-user-manual/concepts and also: http://www.loria.fr/~molli/pmwiki/uploads/Main/so6group03.pdf http://www.loria.fr/~molli/pmwiki/uploads/Main/diffalgo.pdf
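To give a feel for what an implementation involves, below is a short C# sketch of the classic longest-common-subsequence approach that underlies most diff tools. It is the naive O(N*M) formulation and purely illustrative; the Myers O(ND) algorithm mentioned in the answers is what you would want for large inputs, and a VB.NET translation is mechanical:

using System;
using System.Collections.Generic;

static class SimpleDiff
{
    // Emits "  line" for common lines, "- line" for removals, "+ line" for additions.
    public static List<string> Diff(string[] a, string[] b)
    {
        // lcs[i, j] = length of the longest common subsequence of a[i..] and b[j..].
        int[,] lcs = new int[a.Length + 1, b.Length + 1];
        for (int i = a.Length - 1; i >= 0; i--)
            for (int j = b.Length - 1; j >= 0; j--)
                lcs[i, j] = a[i] == b[j]
                    ? lcs[i + 1, j + 1] + 1
                    : Math.Max(lcs[i + 1, j], lcs[i, j + 1]);

        List<string> script = new List<string>();
        int x = 0, y = 0;
        while (x < a.Length && y < b.Length)
        {
            if (a[x] == b[y]) { script.Add("  " + a[x]); x++; y++; }
            else if (lcs[x + 1, y] >= lcs[x, y + 1]) script.Add("- " + a[x++]);
            else script.Add("+ " + b[y++]);
        }
        while (x < a.Length) script.Add("- " + a[x++]);
        while (y < b.Length) script.Add("+ " + b[y++]);
        return script;
    }
}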
'Best' Diff Algorithm
I need to implement a Diff algorithm in VB.NET to find the changes between two different versions of a piece of text. I've had a scout around the web and have found a couple of different algorithms. Does anybody here know of a 'best' algorithm that I could implement?
[ "Well I've used the c# version on codeproject and its really good for what I wanted...\nhttp://www.codeproject.com/KB/recipes/diffengine.aspx\nYou can probably get this translated into VB.net via an online converter if you can't do it yourself...\n", "I like An O(ND) Difference Algorithm and Its Variations by Eugene Myers. I believe it's the algorithm that was used in GNU diff. For a good background see Wikipedia. \nThis is quite theoretical and you might wish to find source code, but I'm not aware of any in VB.\n", "I don't know for sure if it's the best diff algorithms but you might want to check out those links that talks about SOCT4 and SOCT6\nhttp://dev.libresource.org/home/doc/so6-user-manual/concepts\nand also:\nhttp://www.loria.fr/~molli/pmwiki/uploads/Main/so6group03.pdf\nhttp://www.loria.fr/~molli/pmwiki/uploads/Main/diffalgo.pdf\n" ]
[ 7, 7, 3 ]
[]
[]
[ "diff", "vb.net" ]
stackoverflow_0000003144_diff_vb.net.txt
Q: Is there an Unobtrusive Captcha for web forms? What is the best unobtrusive CAPTCHA for web forms? One that does not involve a UI, rather a non-UI Turing test. I have seen a simple example of a non UI CAPTCHA like the Nobot control from Microsoft. I am looking for a CAPTCHA that does not ask the user any question in any form. No riddles, no what's in this image. A: I think you might be alluding to an "invisible" captcha. Check out the Subkismet project for an invisible captcha implementation. http://www.codeplex.com/subkismet A: Try akismet from wp guys A: I think asking the user simple questions like: "How many legs does a dog have?" would be much more effective than any CAPTCHA systems out there at the moment. Not only is it very difficult for the computer to answer that question, but it is very easy for a human to answer! A: Eric Meyer implemented a very similar thing as a WordPress plugin called WP-GateKeeper that asks human-readable questions like "What colour is an orange?". He did have some issues around asking questions that a non-native English speaker would be able to answer simply, though. There are a few posts on his blog about it. A: @KP After your update to the original question, the only real option available to you is to do some jiggery-pokery in Javascript on the client. The only issue with that would be providing graceful degradation for non-javascript enabled clients. e.g. You could add some AJAX-y goodness that reads a hidden form field value, requests a verification key from the server, and sends that back along with the response, but that will never be populated if javascript is blocked/disabled. You could always implement a more traditional captcha type interface which could be disabled by javascript, and ignored by the server if the scripted field is filled in... Depends how far you want to go with it, though. Good luck
Is there an Unobtrusive Captcha for web forms?
What is the best unobtrusive CAPTCHA for web forms? One that does not involve a UI, rather a non-UI Turing test. I have seen a simple example of a non UI CAPTCHA like the Nobot control from Microsoft. I am looking for a CAPTCHA that does not ask the user any question in any form. No riddles, no what's in this image.
[ "I think you might be alluding to an \"invisible\" captcha. Check out the Subkismet project for an invisible captcha implementation.\nhttp://www.codeplex.com/subkismet\n", "Try akismet from wp guys \n", "I think asking the user simple questions like:\n\"How many legs does a dog have?\"\nWould be much more effective that any CAPTCHA systems out there at the moment. Not only is it very difficult for the computer to answer that question, but it is very easy for a human to answer!\n", "Eric Meyer implemented a very similar thing as a WordPress plugin called WP-GateKeeper that asks human-readable questions like \"What colour is an orange?\". He did have some issues around asking questions that a non-native English speaker would be able to answer simply, though. \nThere are a few posts on his blog about it.\n", "@KP\nAfter your update to the original question, the only real option available to you is to do some jiggery-pokery in Javascript on the client. The only issue with that would be provicing graceful degredation for non-javascript enabled clients. \ne.g. You could add some AJAX-y goodness that reads a hidden form filed value, requests a verification key from the server, and sends that back along with the response, but that will never be populated if javascript is blocked/disabled. You could always implement a more traditional captcha type interface which could be disabled by javascript, and ignored by the server if the scripted field if filled in...\nDepends how far you want to go with it, though. Good luck\n" ]
[ 4, 4, 2, 1, 1 ]
[]
[]
[ "captcha", "security", "usability" ]
stackoverflow_0000003027_captcha_security_usability.txt
Q: How can I modify .xfdl files? (Update #1) The .XFDL file extension identifies XFDL Formatted Document files. These belong to the XML-based document and template formatting standard. This format is exactly like the XML file format; however, it contains a level of encryption for use in secure communications. I know how to view XFDL files using a file viewer I found here. I can also modify and save these files by doing File:Save/Save As. I'd like, however, to modify these files on the fly. Any suggestions? Is this even possible? Update #1: I have now successfully decoded and unzipped a .xfdl into an XML file which I can then edit. Now, I am looking for a way to re-encode the modified XML file back into base64-gzip (using Ruby or the command line) A: If the encoding is base64 then this is the solution I've stumbled upon on the web: "Decoding XDFL files saved with 'encoding=base64'. Files saved with: application/vnd.xfdl;content-encoding="base64-gzip" are simple base64-encoded gzip files. They can be easily restored to XML by first decoding and then unzipping them. This can be done as follows on Ubuntu: sudo apt-get install uudeview uudeview -i yourform.xfdl gunzip -S "" < UNKNOWN.001 > yourform-unpacked.xfdl The first command will install uudeview, a package that can decode base64, among others. You can skip this step once it is installed. Assuming your form is saved as 'yourform.xfdl', the uudeview command will decode the contents as 'UNKNOWN.001', since the xfdl file doesn't contain a file name. The '-i' option makes uudeview uninteractive, remove that option for more control. The last command gunzips the decoded file into a file named 'yourform-unpacked.xfdl'. Another possible solution - here Side Note: Block quoted < code > doesn't work for long strings of code A: The only answer I can think of right now is - read the manual for uudeview. As much as I would like to help you, I am not an expert in this area, so you'll have to wait for someone more knowledgeable to come down here and help you. Meanwhile I can give you links to some documents that might help you: UUDeview Home Page Using XDFLengine Getting started with the XDFL Engine Sorry if this doesn't help you. A: You don't have to get out of Ruby to do this; you can use the Base64 module in Ruby to encode the document like this: irb(main):005:0> require 'base64' => true irb(main):007:0> Base64.encode64("Hello World") => "SGVsbG8gV29ybGQ=\n" irb(main):008:0> Base64.decode64("SGVsbG8gV29ybGQ=\n") => "Hello World" And you can call gzip/gunzip using Kernel#system: system("gzip foo.something") system("gunzip foo.something.gz")
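The answers cover Ruby and the command line, as the update asks; if a .NET route ever becomes handy, the compress-and-encode step is also only a few lines with GZipStream. This sketch handles just the gzip plus base64 step; the file names are placeholders, and it deliberately ignores any header line a real XFDL consumer may expect at the top of the file:

using System;
using System.IO;
using System.IO.Compression;

class XfdlReencoder
{
    static void Main()
    {
        // Read the edited XML, gzip it in memory, then write it back out as base64.
        byte[] xml = File.ReadAllBytes("yourform-unpacked.xfdl");

        using (MemoryStream buffer = new MemoryStream())
        {
            using (GZipStream gzip = new GZipStream(buffer, CompressionMode.Compress))
            {
                gzip.Write(xml, 0, xml.Length);
            }
            string encoded = Convert.ToBase64String(buffer.ToArray(),
                Base64FormattingOptions.InsertLineBreaks);
            File.WriteAllText("yourform.xfdl", encoded);
        }
    }
}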
How can I modify .xfdl files? (Update #1)
The .XFDL file extension identifies XFDL Formatted Document files. These belong to the XML-based document and template formatting standard. This format is exactly like the XML file format; however, it contains a level of encryption for use in secure communications. I know how to view XFDL files using a file viewer I found here. I can also modify and save these files by doing File:Save/Save As. I'd like, however, to modify these files on the fly. Any suggestions? Is this even possible? Update #1: I have now successfully decoded and unzipped a .xfdl into an XML file which I can then edit. Now, I am looking for a way to re-encode the modified XML file back into base64-gzip (using Ruby or the command line)
[ "If the encoding is base64 then this is the solution I've stumbled upon on the web:\n\"Decoding XDFL files saved with 'encoding=base64'.\nFiles saved with: \napplication/vnd.xfdl;content-encoding=\"base64-gzip\"\n\nare simple base64-encoded gzip files. They can be easily restored to XML by first decoding and then unzipping them. This can be done as follows on Ubuntu:\nsudo apt-get install uudeview\nuudeview -i yourform.xfdl\ngunzip -S \"\" < UNKNOWN.001 > yourform-unpacked.xfdl \n\nThe first command will install uudeview, a package that can decode base64, among others. You can skip this step once it is installed.\nAssuming your form is saved as 'yourform.xfdl', the uudeview command will decode the contents as 'UNKNOWN.001', since the xfdl file doesn't contain a file name. The '-i' option makes uudeview uninteractive, remove that option for more control.\nThe last command gunzips the decoded file into a file named 'yourform-unpacked.xfdl'.\nAnother possible solution - here\nSide Note: Block quoted < code > doesn't work for long strings of code\n", "The only answer I can think of right now is - read the manual for uudeview.\nAs much as I would like to help you, I am not an expert in this area, so you'll have to wait for someone more knowledgable to come down here and help you.\nMeanwhile I can give you links to some documents that might help you:\n\nUUDeview Home Page\nUsing XDFLengine\nGettting started with the XDFL Engine\n\nSorry if this doesn't help you.\n", "You don't have to get out of Ruby to do this, can use the Base64 module in Ruby to encode the document like this:\nirb(main):005:0> require 'base64'\n=> true\n\nirb(main):007:0> Base64.encode64(\"Hello World\")\n=> \"SGVsbG8gV29ybGQ=\\n\"\n\nirb(main):008:0> Base64.decode64(\"SGVsbG8gV29ybGQ=\\n\")\n=> \"Hello World\"\n\nAnd you can call gzip/gunzip using Kernel#system:\nsystem(\"gzip foo.something\")\nsystem(\"gunzip foo.something.gz\")\n\n" ]
[ 5, 2, 1 ]
[]
[]
[ "language_agnostic", "ruby", "xfdl" ]
stackoverflow_0000001615_language_agnostic_ruby_xfdl.txt
Q: SQL query for a database scheme In SQL Server how do you query a database to bring back all the tables that have a field of a specific name? A: The following query will bring back a unique list of tables where Column_Name is equal to the column you are looking for: SELECT Table_Name FROM INFORMATION_SCHEMA.COLUMNS WHERE Column_Name = 'Desired_Column_Name' GROUP BY Table_Name A: SELECT Table_Name FROM Information_Schema.Columns WHERE Column_Name = 'YourFieldName' A: I'm old-school: SELECT DISTINCT object_name(id) FROM syscolumns WHERE name = 'FIELDNAME'
SQL query for a database scheme
In SQL Server how do you query a database to bring back all the tables that have a field of a specific name?
[ "The following query will bring back a unique list of tables where Column_Name is equal to the column you are looking for:\nSELECT Table_Name\nFROM INFORMATION_SCHEMA.COLUMNS\nWHERE Column_Name = 'Desired_Column_Name'\nGROUP BY Table_Name\n\n", "SELECT Table_Name\nFROM Information_Schema.Columns\nWHERE Column_Name = 'YourFieldName'\n\n", "I'm old-school:\nSELECT DISTINCT object_name(id)\nFROM syscolumns\nWHERE name = 'FIELDNAME'\n\n" ]
[ 7, 0, 0 ]
[]
[]
[ "sql", "sql_server" ]
stackoverflow_0000003567_sql_sql_server.txt
Q: Looking for code to render a form that displays a view of an object I've got the task of displaying a web form to represent the properties in a .NET class. In WinForms, there's a pre-fab control named PropertyGrid that is a lot like what I need. I'm just looking for something to display a simple layout of property names next to an appropriate control like a textbox for strings or a dropdownlist for enum properties. Does anything like this already exist for ASP.NET, or will I be rolling my own here? A: ASP.Net PropertyGrid
Looking for code to render a form that displays a view of an object
I've got the task of displaying a web form to represent the properties in a .NET class. In WinForms, there's a pre-fab control named PropertyGrid that is a lot like what I need. I'm just looking for something to display a simple layout of property names next to an appropriate control like a textbox for strings or a dropdownlist for enum properties. Does anything like this already exist for ASP.NET, or will I be rolling my own here?
[ "ASP.Net PropertyGrid\n" ]
[ 8 ]
[]
[]
[ "asp.net", "controls", "propertygrid" ]
stackoverflow_0000003757_asp.net_controls_propertygrid.txt
Q: Setup Visual Studio 2005 to print line numbers How can I get line numbers to print in Visual Studio 2005 when printing code listings? A: Isn't there an option in the Print Dialog? Edit: There is. Go to File => Print, and then in the bottom left there is "Print what" and then "Include line Numbers" A: There is an option in the Print Dialog to do the same (in VS 2005 and 2008 at least)!
Setup Visual Studio 2005 to print line numbers
How can I get line numbers to print in Visual Studio 2005 when printing code listings?
[ "Isn't there an option in the Print Dialog?\nEdit: There is. Go to File => Print, and then in the bottom left there is \"Print what\" and then \"Include line Numbers\"\n", "There is an option in the Print Dialog to do the same (in VS 2005 and 2008 atleast)!\n" ]
[ 7, 5 ]
[]
[]
[ "line_numbers", "visual_studio", "visual_studio_2005" ]
stackoverflow_0000003809_line_numbers_visual_studio_visual_studio_2005.txt
Q: More vs. Faster Cores on a Webserver The discussion of Dual vs. Quadcore is as old as the Quadcore itself and the answer is usually "it depends on your scenario". So here the scenario is a Web Server (Windows 2003 (not sure if x32 or x64), 4 GB RAM, IIS, ASP.net 3.0). My impression is that the CPU in a Webserver does not need to be THAT fast because requests are usually rather lightweight, so having more (slower) cores should be a better choice as we get many small requests. But since I do not have much experience with IIS load balancing and since I don't want to spend a lot of money only to find out I've made the wrong choice, can someone who has a bit more experience comment on whether or not More Slower or Fewer Faster cores is better? A: For something like a webserver, dividing up the tasks of handling each connection is (relatively) easy. I say it's safe to say that web servers are one of the most common (and ironed out) uses of parallel code. And since you are able to split up much of the processing into multiple discrete threads, more cores actually do benefit you. This is one of the big reasons why shared hosting is even possible. If server software like IIS and Apache couldn't run requests in parallel it would mean that every page request would have to be dished out in a queue fashion...likely making load times unbearably slow. This is also why high-end server operating systems like Windows 2008 Server Enterprise support something like 64 cores and 2TB of RAM. These are applications that can actually take advantage of that many cores. Also, since each request likely has low CPU load, you can probably (for some applications) get away with more slower cores. But obviously having each core faster can mean being able to get each task done quicker and, in theory, handle more tasks and more server requests. A: We use apache on linux, which forks a process to handle requests. We've found that more cores help our throughput, since they reduce the latency of processes waiting to be placed on the run queue. I don't have much experience with IIS, but I imagine the same scenario applies with its thread pool. A: Mark Harrison said: I don't have much experience with IIS, but I imagine the same scenario applies with its thread pool. Indeed - more cores = more threads running concurrently. IIS is inherently multithreaded, and takes easy advantage of this. A: The more the better. As programming languages start to become more complex and abstract, the more processing power will be required. At least Jeff believes Quadcore is better.
More vs. Faster Cores on a Webserver
The discussion of Dual vs. Quadcore is as old as the Quadcore itself and the answer is usually "it depends on your scenario". So here the scenario is a Web Server (Windows 2003 (not sure if x32 or x64), 4 GB RAM, IIS, ASP.net 3.0). My impression is that the CPU in a Webserver does not need to be THAT fast because requests are usually rather lightweight, so having more (slower) cores should be a better choice as we get many small requests. But since I do not have much experience with IIS load balancing and since I don't want to spend a lot of money only to find out I've made the wrong choice, can someone who has a bit more experience comment on whether or not More Slower or Fewer Faster cores is better?
[ "For something like a webserver, dividing up the tasks of handling each connection is (relatively) easy. I say it's safe to say that web servers is one of the most common (and ironed out) uses of parallel code. And since you are able to split up much of the processing into multiple discrete threads, more cores actually does benefit you. This is one of the big reasons why shared hosting is even possible. If server software like IIS and Apache couldn't run requests in parallel it would mean that every page request would have to be dished out in a queue fashion...likely making load times unbearably slow.\nThis also why high end server Operating Systems like Windows 2008 Server Enterprise support something like 64 cores and 2TB of RAM. These are applications that can actually take advantage of that many cores.\nAlso, since each request is likely has low CPU load, you can probably (for some applications) get away with more slower cores. But obviously having each core faster can mean being able to get each task done quicker and, in theory, handle more tasks and more server requests.\n", "We use apache on linux, which forks a process to handle requests. We've found that more cores help our throughput, since they reduce the latency of processes waiting to be placed on the run queue. I don't have much experience with IIS, but I imagine the same scenario applies with its thread pool.\n", "Mark Harrison said:\n\nI don't have much experience with IIS, but I imagine the same scenario applies with its thread pool.\n\nIndeed - more cores = more threads running concurrently. IIS is inherently multithreaded, and takes easy advantage of this.\n", "The more the better. As programming languages start to become more complex and abstract, the more processing power that will be required.\nAtleat Jeff believes Quadcore is better.\n" ]
[ 16, 3, 3, 1 ]
[]
[]
[ "asp.net", "hardware", "iis", "windows" ]
stackoverflow_0000003801_asp.net_hardware_iis_windows.txt
Q: How do I best detect an ASP.NET expired session? I need to detect when a session has expired in my Visual Basic web application. This is what I'm using... Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If CurrentSession.IsNew AndAlso (Not Page.Request.Headers("Cookie") Is Nothing) AndAlso (Page.Request.Headers("Cookie").IndexOf("ASP.NET_SessionId") >= 0) Then Response.Redirect("TimeOut.aspx") End If ...do something... End Sub Note: CurrentSession.IsNew returns HttpContext.Current.Session.IsNewSession This seems to work well for Internet Explorer, but seems to fail with Firefox. A: Try the following If Session("whatever") IsNot Nothing Then
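For reference, the question's new-session-plus-existing-cookie check translates to C# like this; it is a sketch of the same approach, not a fix for the Firefox behaviour described in the question:

protected void Page_Load(object sender, EventArgs e)
{
    // A brand-new session combined with an ASP.NET_SessionId cookie left over
    // from a previous visit suggests the old session timed out.
    if (Session.IsNewSession && Request.Headers["Cookie"] != null
        && Request.Headers["Cookie"].IndexOf("ASP.NET_SessionId") >= 0)
    {
        Response.Redirect("TimeOut.aspx");
    }
}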
How do I best detect an ASP.NET expired session?
I need to detect when a session has expired in my Visual Basic web application. This is what I'm using... Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If CurrentSession.IsNew AndAlso (Not Page.Request.Headers("Cookie") Is Nothing) AndAlso (Page.Request.Headers("Cookie").IndexOf("ASP.NET_SessionId") >= 0) Then Response.Redirect("TimeOut.aspx") End If ...do something... End Sub Note: CurrentSession.IsNew returns HttpContext.Current.Session.IsNewSession This seems to work well for Internet Explorer, but seems to fail with Firefox.
[ "Try the following\nIf Session(\"whatever\") IsNot Nothing Then\n\n" ]
[ 10 ]
[]
[]
[ "asp.net", "http", "session" ]
stackoverflow_0000003831_asp.net_http_session.txt
Q: How do you typeset code elements in normal text? What is the best way to typeset a function with arguments for readability, brevity, and accuracy? I tend to put empty parentheses after the function name like func(), even if there are actually arguments for the function. I have trouble including the arguments and still feeling like the paragraph is readable. Any thoughts on best practices for this? A: I usually take that approach, but if I feel like it's going to cause confusion, I'll use ellipses like: myFunction(...) I guess if I were good, I would use those any time I was omitting parameters from a function in text. A: I would simply be a little more careful with the name of my variables and parameters, most people will then be able to guess much more accurately what type of data you want to hold in it.
How do you typeset code elements in normal text?
What is the best way to typeset a function with arguments for readability, brevity, and accuracy? I tend to put empty parentheses after the function name like func(), even if there are actually arguments for the function. I have trouble including the arguments and still feeling like the paragraph is readable. Any thoughts on best practices for this?
[ "I usually take that approach, but if I feel like it's going to cause confusion, I'll use ellipses like: myFunction(...)\nI guess if I were good, I would use those any time I was omitting parameters from a function in text.\n", "I would simply be a little more careful with the name of my variables and parameters, most people will then be able to guess much more accurately what type of data you want to hold in it.\n" ]
[ 3, 1 ]
[]
[]
[ "format", "language_agnostic" ]
stackoverflow_0000003802_format_language_agnostic.txt
Q: Adobe Flex component events I wrote a component that displays a filename, a thumbnail and has a button to load/play the file. The component is databound to a repeater. How can I make it so that the button event fires to the main application and tells it which file to play? A: Figured it out (finally) Custom Component <?xml version="1.0" encoding="utf-8"?> <mx:Canvas xmlns:mx="http://www.adobe.com/2006/mxml" x="0" y="0" width="215" height="102" styleName="leftListItemPanel" backgroundColor="#ECECEC" horizontalScrollPolicy="off" verticalScrollPolicy="off"> <mx:Script> <![CDATA[ [Bindable] public var Title:String = ""; [Bindable] public var Description:String = ""; [Bindable] public var Icon:String = ""; [Bindable] public var FileID:String = ""; private function viewClickHandler():void{ dispatchEvent(new Event("viewClick", true));// bubble to parent } ]]> </mx:Script> <mx:Metadata> [Event(name="viewClick", type="flash.events.Event")] </mx:Metadata> <mx:Label x="11" y="9" text="{String(Title)}" styleName="listItemLabel"/> <mx:TextArea x="11" y="25" height="36" width="170" backgroundAlpha="0.0" alpha="0.0" styleName="listItemDesc" wordWrap="true" editable="false" text="{String(Description)}"/> <mx:Button x="20" y="65" label="View" click="viewClickHandler();" styleName="listItemButton" height="22" width="60"/> <mx:LinkButton x="106" y="68" label="Details..." styleName="listItemLink" height="18"/> <mx:HRule x="0" y="101" width="215"/> The Repeater <mx:Canvas id="pnlSpotlight" label="SPOTLIGHT" height="100%" width="100%" horizontalScrollPolicy="off"> <mx:VBox width="100%" height="80%" paddingTop="2" paddingBottom="1" verticalGap="1"> <mx:Repeater id="rptrSpotlight" dataProvider="{aSpotlight}"> <sm:SmallCourseListItem viewClick="PlayFile(event.currentTarget.getRepeaterItem().fileName);" Description="{rptrSpotlight.currentItem.fileDescription}" FileID = "{rptrRecentlyViewed.currentItem.fileName}" Title="{rptrSpotlight.currentItem.fileTitle}" /> </mx:Repeater> </mx:VBox> </mx:Canvas> Handling function private function PlayFile(fileName:String):void{ Alert.show(fileName.toString()); } A: On your custom component you can listen to the button click event and then generate a custom event that holds information about the file you want to play. You can then set the bubbles property to true on the event and dispatch the custom event from your custom component. The bubbles property will make your event float up the display list and reach your main application. Now on your main application you can listen to that event and play the correct file. Hope this helps.
Adobe Flex component events
I wrote a component that displays a filename, a thumbnail and has a button to load/play the file. The component is databound to a repeater. How can I make it so that the button event fires to the main application and tells it which file to play?
[ "Figured it out (finally)\nCustom Component\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<mx:Canvas xmlns:mx=\"http://www.adobe.com/2006/mxml\" x=\"0\" y=\"0\" width=\"215\" height=\"102\" styleName=\"leftListItemPanel\" backgroundColor=\"#ECECEC\" horizontalScrollPolicy=\"off\" verticalScrollPolicy=\"off\">\n<mx:Script>\n <![CDATA[\n [Bindable] public var Title:String = \"\";\n [Bindable] public var Description:String = \"\";\n [Bindable] public var Icon:String = \"\"; \n [Bindable] public var FileID:String = \"\";\n private function viewClickHandler():void{\n dispatchEvent(new Event(\"viewClick\", true));// bubble to parent\n }\n ]]>\n</mx:Script>\n<mx:Metadata>\n [Event(name=\"viewClick\", type=\"flash.events.Event\")]\n</mx:Metadata>\n<mx:Label x=\"11\" y=\"9\" text=\"{String(Title)}\" styleName=\"listItemLabel\"/>\n<mx:TextArea x=\"11\" y=\"25\" height=\"36\" width=\"170\" backgroundAlpha=\"0.0\" alpha=\"0.0\" styleName=\"listItemDesc\" wordWrap=\"true\" editable=\"false\" text=\"{String(Description)}\"/>\n<mx:Button x=\"20\" y=\"65\" label=\"View\" click=\"viewClickHandler();\" styleName=\"listItemButton\" height=\"22\" width=\"60\"/>\n<mx:LinkButton x=\"106\" y=\"68\" label=\"Details...\" styleName=\"listItemLink\" height=\"18\"/>\n<mx:HRule x=\"0\" y=\"101\" width=\"215\"/>\n\n\nThe Repeater\n<mx:Canvas id=\"pnlSpotlight\" label=\"SPOTLIGHT\" height=\"100%\" width=\"100%\" horizontalScrollPolicy=\"off\">\n <mx:VBox width=\"100%\" height=\"80%\" paddingTop=\"2\" paddingBottom=\"1\" verticalGap=\"1\">\n <mx:Repeater id=\"rptrSpotlight\" dataProvider=\"{aSpotlight}\"> \n <sm:SmallCourseListItem \n viewClick=\"PlayFile(event.currentTarget.getRepeaterItem().fileName);\"\n Description=\"{rptrSpotlight.currentItem.fileDescription}\"\n FileID = \"{rptrRecentlyViewed.currentItem.fileName}\" \n Title=\"{rptrSpotlight.currentItem.fileTitle}\" />\n </mx:Repeater>\n </mx:VBox>\n</mx:Canvas>\n\nHandling function\nprivate function PlayFile(fileName:String):void{\n Alert.show(fileName.toString());\n}\n\n", "On your custom component you can listen to the button click event and then generate a custom event that holds information about the file you want to play. You can then set the bubbles property to true on the event and dispatch the custom event from your custom component. The bubbles property will make your event float up the display list and reach your main application. Now on your main application you can listen to that event and play the correct file. Hope this helps.\n" ]
[ 1, 1 ]
[]
[]
[ "actionscript_3", "apache_flex" ]
stackoverflow_0000003856_actionscript_3_apache_flex.txt
Q: Is this a good way to determine OS Architecture? Since the WMI class Win32_OperatingSystem only includes OSArchitecture in Windows Vista, I quickly wrote up a method using the registry to try and determine whether or not the current system is a 32 or 64bit system. private Boolean is64BitOperatingSystem() { RegistryKey localEnvironment = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment"); String processorArchitecture = (String) localEnvironment.GetValue("PROCESSOR_ARCHITECTURE"); if (processorArchitecture.Equals("x86")) { return false; } else { return true; } } It's worked out pretty well for us so far, but I'm not sure how much I like looking through the registry. Is this a pretty standard practice or is there a better method? Edit: Wow, that code looks a lot prettier in the preview. I'll consider linking to a pastebin or something, next time. A: Take a look at Raymond Chen's solution: How to detect programmatically whether you are running on 64-bit Windows and here's the PINVOKE for .NET: IsWow64Process (kernel32) Update: I'd take issue with checking for 'x86'. Who's to say what Intel's or AMD's next 32 bit processor may be designated as. The probability is low but it is a risk. You should ask the OS to determine this via the correct API's, not by querying what could be an OS version/platform specific value that may be considered opaque to the outside world. Ask yourself the questions, 1 - is the registry entry concerned properly documented by MS, 2 - If it is do they provide a definitive list of possible values that is guaranteed to permit you as a developer to make the informed decision between whether you are running 32 bit or 64 bit. If the answer is no, then call the API's, yeah it's a bit more long winded but it is documented and definitive. A: The easiest way to test for 64-bit under .NET is to check the value of IntPtr.Size. I believe the value of IntPtr.Size is 4 for a 32bit app that's running under WOW, isn't it? Edit: @Edit: Yeah. :) A: Looking into the registry is perfectly valid, so long as you can be sure that the user of the application will always have access to what you need. A: The easiest way to test for 64-bit under .NET is to check the value of IntPtr.Size. EDIT: Doh! This will tell you whether or not the current process is 64-bit, not the OS as a whole. Sorry!
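Putting the IsWow64Process suggestion from the first answer into code, a minimal C# sketch looks like the following; note that IsWow64Process only exists on Windows XP SP2 and later, so a hardened version should probe for the export with GetProcAddress first:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class OsArch
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool IsWow64Process(IntPtr hProcess, out bool wow64Process);

    public static bool Is64BitOperatingSystem()
    {
        // A 64-bit process can only be running on a 64-bit OS.
        if (IntPtr.Size == 8) return true;

        // Otherwise we are a 32-bit process: the OS is 64-bit exactly when
        // this process is running under WOW64.
        bool isWow64;
        IsWow64Process(Process.GetCurrentProcess().Handle, out isWow64);
        return isWow64;
    }
}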
Is this a good way to determine OS Architecture?
Since the WMI class Win32_OperatingSystem only includes OSArchitecture in Windows Vista, I quickly wrote up a method using the registry to try and determine whether or not the current system is a 32 or 64bit system. private Boolean is64BitOperatingSystem() { RegistryKey localEnvironment = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment"); String processorArchitecture = (String) localEnvironment.GetValue("PROCESSOR_ARCHITECTURE"); if (processorArchitecture.Equals("x86")) { return false; } else { return true; } } It's worked out pretty well for us so far, but I'm not sure how much I like looking through the registry. Is this a pretty standard practice or is there a better method? Edit: Wow, that code looks a lot prettier in the preview. I'll consider linking to a pastebin or something, next time.
[ "Take a look at Raymond Chens solution:\nHow to detect programmatically whether you are running on 64-bit Windows\nand here's the PINVOKE for .NET:\nIsWow64Process (kernel32)\nUpdate: I'd take issue with checking for 'x86'. Who's to say what intel's or AMD's next 32 bit processor may be designated as. The probability is low but it is a risk. You should ask the OS to determine this via the correct API's, not by querying what could be a OS version/platform specific value that may be considered opaque to the outside world. Ask yourself the questions, 1 - is the registry entry concerned properly documented by MS, 2 - If it is do they provide a definitive list of possible values that is guaranteed to permit you as a developer to make the informed decision between whether you are running 32 bit or 64 bit. If the answer is no, then call the API's, yeah it's a but more long winded but it is documented and definitive. \n", "\nThe easiest way to test for 64-bit under .NET is to check the value of IntPtr.Size.\n\nI believe the value of IntPtr.Size is 4 for a 32bit app that's running under WOW, isn't it?\nEdit: @Edit: Yeah. :)\n", "Looking into the registry is perfectly valid, so long as you can be sure that the user of the application will always have access to what you need.\n", "The easiest way to test for 64-bit under .NET is to check the value of IntPtr.Size.\nEDIT: Doh! This will tell you whether or not the current process is 64-bit, not the OS as a whole. Sorry!\n" ]
[ 8, 2, 1, 1 ]
[]
[]
[ "c#", "registry", "windows" ]
stackoverflow_0000003903_c#_registry_windows.txt
Q: Multi-Paradigm Languages In a language such as (since I'm working in it now) PHP, which supports procedural and object-oriented paradigms. Is there a good rule of thumb for determining which paradigm best suits a new project? If not, how can you make the decision? A: It all depends on the problem you're trying to solve. Obviously you can solve any problem in either style (procedural or OO), but you usually can figure out in the planning stages before you start writing code which style suits you better. Some people like to write up use cases and if they see a lot of the same nouns showing up over and over again (e.g., a person withdraws money from the bank), then they go the OO route and use the nouns as their objects. Conversely, if you don't see a lot of nouns and there's really more verbs going on, then procedural or functional may be the way to go. Steve Yegge has a great but long post as usual that touches on this from a different perspective that you may find helpful as well. A: If you're doing something for yourself, or if you're doing just a prototype, or testing an idea... use the free style that script languages gives you. After that: always think in objects, try to organize your work around the OO paradigm even if you're writing procedural stuff. Then, refactorize, refactorize, refactorize.
Multi-Paradigm Languages
In a language such as (since I'm working in it now) PHP, which supports procedural and object-oriented paradigms. Is there a good rule of thumb for determining which paradigm best suits a new project? If not, how can you make the decision?
[ "It all depends on the problem you're trying to solve. Obviously you can solve any problem in either style (procedural or OO), but you usually can figure out in the planning stages before you start writing code which style suits you better.\nSome people like to write up use cases and if they see a lot of the same nouns showing up over and over again (e.g., a person withdraws money from the bank), then they go the OO route and use the nouns as their objects. Conversely, if you don't see a lot of nouns and there's really more verbs going on, then procedural or functional may be the way to go.\nSteve Yegge has a great but long post as usual that touches on this from a different perspective that you may find helpful as well.\n", "If you're doing something for yourself, or if you're doing just a prototype, or testing an idea... use the free style that script languages gives you. \nAfter that: always think in objects, try to organize your work around the OO paradigm even if you're writing procedural stuff. Then, refactorize, refactorize, refactorize.\n" ]
[ 11, 2 ]
[]
[]
[ "oop", "paradigms", "php", "procedural" ]
stackoverflow_0000003978_oop_paradigms_php_procedural.txt
Q: How do I configure a Vista Ultimate (64bit) account so it can access a SMB share on OSX? I have Windows File sharing enabled on an OS X 10.4 computer. It's accessible via \rudy\myshare for all the Windows users on the network, except for one guy running Vista Ultimate 64-bit edition. All the other users are running Vista or XP, all 32-bit. All the workgroup information is the same, all login with the same username/password. The Vista 64 guy can see the Mac on the network, but his login is rejected every time. Now, I imagine that Vista Ultimate has something configured differently to the Business version and XP, but I don't really know where to look. Any ideas? A: Try changing the local security policy on that Vista box for "Local Policies\Security Options\Network Security: LAN manager authentication level" from “Send NTLMv2 response only” to “Send LM & NTLM - use NTLMv2 session security if negotiated”. A: No, I have successfully done this with my Vista 64-bit machine. You may want to try using the IP Address of the machine and try connecting that way. Or maybe check out the log files on the Mac to see what the rejection error was.
How do I configure a Vista Ultimate (64bit) account so it can access a SMB share on OSX?
I have Windows File sharing enabled on an OS X 10.4 computer. It's accessible via \rudy\myshare for all the Windows users on the network, except for one guy running Vista Ultimate 64-bit edition. All the other users are running Vista or XP, all 32-bit. All the workgroup information is the same, all login with the same username/password. The Vista 64 guy can see the Mac on the network, but his login is rejected every time. Now, I imagine that Vista Ultimate has something configured differently to the Business version and XP, but I don't really know where to look. Any ideas?
[ "Try changing the local security policy on that Vista box for \"Local Policies\\Security Options\\Network Security: LAN manager authentication level\" from “Send NTLMv2 response only” to “Send LM & NTLM - use NTLMv2 session security if negotiated”.\n", "No I have successfully done this with my Vista 64-bit machine. You may want to try using the IP Address of the machine and try connecting that way. Or maybe check out the log files on the Mac to see what the rejection error was.\n" ]
[ 3, 0 ]
[]
[]
[ "macos", "smb", "windows_vista" ]
stackoverflow_0000003996_macos_smb_windows_vista.txt
Q: Linking two Office documents Problem: I have two spreadsheets that each serve different purposes but contain one particular piece of data that needs to be the same in both spreadsheets. This piece of data (one of the columns) gets updated in spreadsheet A but needs to also be updated in spreadsheet B. Goal: A solution that would somehow link these two spreadsheets together (keep in mind that they exist on two separate LAN shares on the network) so that when A is updated, B is automatically updated for the corresponding record. *Note that I understand fully that a database would probably be a better plan for tasks such as these but unfortunately I have no say in that matter. **Note also that this needs to work for Office 2003 and Office 2007 A: So you mean that AD743 on spreadsheet B must be equal to AD743 on spreadsheet A? Try this: Open both spreadsheets on the same machine. Go to AD743 on spreadsheet B. Type =. Go to spreadsheet A and click on AD743. Press enter. You'll notice that the formula is something like '[path-to-file+file-name].worksheet-name!AD743'. The value on spreadsheet B will be updated when you open it. In fact, it will ask you if you want to update. Of course, your connection must be up and running for it to update. Also, you can't change the name or the path of spreadsheet A. A: I can't say if this is overkill without knowing the details of your usage case, but consider creating a spreadsheet C to hold all data held in common between the two. Links can become dizzyingly complex as spreadsheets age, and having a shared data source might help clear up the confusion. Perhaps even more "enterprise-y" is the concept of just pasting in all data that otherwise would be shared. That is the official best practice in my company, because external links have caused so much trouble with maintainability. It may seem cumbersome at first, but I've found it may just be the best way to promote maintainability in addition to ease of use, assuming you don't mind the manual intervention.
Linking two Office documents
Problem: I have two spreadsheets that each serve different purposes but contain one particular piece of data that needs to be the same in both spreadsheets. This piece of data (one of the columns) gets updated in spreadsheet A but needs to also be updated in spreadsheet B. Goal: A solution that would somehow link these two spreadsheets together (keep in mind that they exist on two separate LAN shares on the network) so that when A is updated, B is automatically updated for the corresponding record. *Note that I understand fully that a database would probably be a better plan for tasks such as these but unfortunately I have no say in that matter. **Note also that this needs to work for Office 2003 and Office 2007
[ "So you mean that AD743 on spreadsheet B must be equal to AD743 on spreadsheet A? Try this:\n\nOpen both spreadsheets on the same\nmachine.\nGo to AD743 on spreadsheet B.\nType =.\nGo to spreadsheed A and click on\nAD743.\nPress enter.\n\nYou'll notice that the formula is something like '[path-to-file+file-name].worksheet-name!AD743'.\nThe value on spreadsheet B will be updated when you open it. In fact, it will ask you if you want to update. Of course, your connection must be up and running for it to update. Also, you can't change the name or the path of spreadsheet A.\n", "I can't say if this is overkill without knowing the details of your usage case, but consider creating a spreadsheet C to hold all data held in common between the two. Links can become dizzyingly complex as spreadsheets age, and having a shared data source might help clear up the confusion.\nPerhaps even more \"enterprise-y\" is the concept of just pasting in all data that otherwise would be shared. That is the official best practice in my company, because external links have caused so much trouble with maintainability. It may seem cumbersome at first, but I've found it may just be the best way to promote maintainability in addition to ease of use, assuming you don't mind the manual intervention.\n" ]
[ 5, 0 ]
[]
[]
[ "office_2003", "office_2007" ]
stackoverflow_0000003045_office_2003_office_2007.txt
Q: Why doesn't VFP .NET OLEdb provider work in 64 bit Windows? I wrote a Windows service using VB that reads some legacy data from Visual FoxPro databases to be inserted into SQL 2005. The problem is this used to run fine on Windows Server 2003 32-bit, but the client recently moved to Windows 2003 64-bit and now the service won't work. I'm getting a message that the VFP .NET OLE DB provider is not found. I researched and everything seems to point out that there is no solution. Any help, please... A: Have you tried changing the target CPU to x86 instead of "Any CPU" in the advanced compiler options? I know that this solves some problems with other OLEDB providers by forcing the use of the 32-bit version. A: You'll need to compile with the target CPU set to x86 to force your code to use the 32 bit version of the VFP OLE Db provider. Microsoft has stated that there are no plans on releasing a 64-bit edition of the Visual FoxPro OLE Db provider. For what it's worth, Microsoft has also stated that VFP 9 is the final version of Visual FoxPro and support will end in 2015. If you need the OLE DB provider for VFP 9, you can get it here. A: Sybase Anywhere has an OLEDB provider for VFP tables. The page states that the server supports 64-bit Windows; I don't know about the OLEDB provider: Support 64-bit Windows and Linux Servers In order to further enhance scalability, support for the x86_64 architecture was added to the Advantage Database Servers for Windows and Linux. On computers with an x86_64 processor and a 64 bit Operating System the Advantage Database Server will now be able to use memory in excess of 4GB. The extra memory will allow more users to access the server concurrently and increase the amount of information the server can cache when processing queries. I didn't try it by myself, but some people on the VFP newsgroups report that it works OK. Link to the Advantage Server / VFP Page
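To confirm the failure mode the answers describe before recompiling, a small hedged C# check shows whether the process is actually 32-bit; the VFP OLE DB provider can only be loaded by a 32-bit process, and IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit one:

using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr.Size is 4 bytes in a 32-bit process and 8 in a 64-bit one.
        // A 32-bit-only OLE DB provider such as VFP's can be found only
        // when this prints the 32-bit message.
        Console.WriteLine(IntPtr.Size == 4
            ? "32-bit process: the VFP OLE DB provider can load."
            : "64-bit process: a 32-bit-only OLE DB provider will not be found.");
    }
}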
Why doesn't VFP .NET OLEdb provider work in 64 bit Windows?
I wrote a Windows service using VB that reads some legacy data from Visual FoxPro databases to be inserted into SQL 2005. The problem is this used to run fine on Windows Server 2003 32-bit, but the client recently moved to Windows 2003 64-bit and now the service won't work. I'm getting a message that the VFP .NET OLE DB provider is not found. I researched and everything seems to point out that there is no solution. Any help, please...
[ "Have you tried changing the target CPU to x86 instead of \"Any CPU\" in the advanced compiler options? I know that this solves some problems with other OLEDB providers by forcing the use of the 32-bit version.\n", "You'll need to compile with the target CPU set to x86 to force your code to use the 32 bit version of the VFP OLE Db provider. \nMicrosoft has stated that there are no plans on releasing a 64-bit edition of the Visual FoxPro OLE Db provider. For what's worth, Microsoft has also stated that VFP 9 is the final version of Visual FoxPro and support will end in 2015. If you need the OLE DB provider for VFP 9, you can get it here.\n", "Sybase Anywhere has a OLEDB provider for VFP tables. It states in the page that the server supports 64 bit Windows, don't know about the OLEDB provider:\n\nSupport 64-bit Windows and Linux Servers\nIn order to further enhance scalability, support for the x86_64 architecture was added to the Advantage Database Servers for Windows and Linux. On computers with an x86_64 processor and a 64 bit Operating System the Advantage Database Server will now be able to use memory in excess of 4GB. The extra memory will allow more users to access the server concurrently and increase the amount of information the server can cache when processing queries.\n\nI didn't try it by myself, but some people of the VFP newsgroups reports that it works OK.\nLink to the Advantage Server / VFP Page\n" ]
[ 15, 9, 1 ]
[]
[]
[ ".net", "legacy", "oledb", "sql_server_2005", "visual_foxpro" ]
stackoverflow_0000000717_.net_legacy_oledb_sql_server_2005_visual_foxpro.txt
Q: Searching directories for tons of files? I'm using MSVE, and I have my own tiles I'm displaying in layers on top. Problem is, there's a ton of them, and they're on a network server. In certain directories, there are something on the order of 30,000+ files. Initially I called Directory.GetFiles, but once I started testing in a pseudo-real environment, it timed out. What's the best way to programmatically list, and iterate through, this many files? Edit: My coworker suggested using the MS indexing service. Has anyone tried this approach, and (how) has it worked? A: I've worked on a SAN system in the past with telephony audio recordings which had issues with numbers of files in a single folder - that system became unusable somewhere near 5,000 (on Windows 2000 Advanced Server with an application in C#.Net 1.1) - the only sensible solution that we came up with was to change the folder structure so that there were a more reasonable number of files. Interestingly Explorer would also time out! The convention we came up with was a structure that broke the structure up in years, months and days - but that will depend upon your system and whether you can control the directory structure... A: Definitely split them up. That said, stay as far away from the Indexing Service as you can. A: None. .NET relies on underlying Windows API calls that really, really hate that amount of files themselves. As Ronnie says: split them up. A: You could use DOS? DIR /s/b > Files.txt A: You could also look at either indexing the files yourself, or getting a third-party app like Google Desktop or Copernic to do it and then interface with their index. I know Copernic has an API that you can use to search for any file in their index and it also supports mapping network drives.
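If targeting .NET 4 or later is an option, Directory.EnumerateFiles streams entries lazily instead of building the entire 30,000-element array up front the way Directory.GetFiles does, which avoids the long blocking call; a minimal sketch (the share path is a placeholder):

using System;
using System.IO;

class TileLister
{
    static void Main()
    {
        // Hypothetical share path -- substitute your tile directory.
        string root = @"\\server\tiles";

        // EnumerateFiles yields entries one at a time, so processing
        // can begin immediately instead of waiting for a full listing.
        foreach (string file in Directory.EnumerateFiles(
            root, "*", SearchOption.AllDirectories))
        {
            Console.WriteLine(file);
        }
    }
}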
Searching directories for tons of files?
I'm using MSVE, and I have my own tiles I'm displaying in layers on top. Problem is, there's a ton of them, and they're on a network server. In certain directories, there are something on the order of 30,000+ files. Initially I called Directory.GetFiles, but once I started testing in a pseudo-real environment, it timed out. What's the best way to programmatically list, and iterate through, this many files? Edit: My coworker suggested using the MS indexing service. Has anyone tried this approach, and (how) has it worked?
[ "I've worked on a SAN system in the past with telephony audio recordings which had issues with numbers of files in a single folder - that system became unusable somewhere near 5,000 (on Windows 2000 Advanced Server with an application in C#.Net 1.1)- the only sensible solution that we came up with was to change the folder structure so that there were a more reasonable number of files. Interestingly Explorer would also time out!\nThe convention we came up with was a structure that broke the structure up in years, months and days - but that will depend upon your system and whether you can control the directory structure...\n", "Definitely split them up. That said, stay as far away from the Indexing Service as you can.\n", "None. .NET relies on underlying Windows API calls that really, really hate that amount of files themselves.\nAs Ronnie says: split them up.\n", "You could use DOS?\nDIR /s/b > Files.txt\n\n", "You could also look at either indexing the files yourself, or getting a third part app like google desktop or copernic to do it and then interface with their index. I know copernic has an API that you can use to search for any file in their index and it also supports mapping network drives.\n" ]
[ 6, 2, 1, 1, 1 ]
[]
[]
[ "c#", "directory", "file_management" ]
stackoverflow_0000003512_c#_directory_file_management.txt
Q: Programmatically talking to a Serial Port in OS X or Linux I have a Prolite LED sign that I'd like to set up to show scrolling search queries from Apache logs and other fun statistics. The problem is, my G5 does not have a serial port, so I have to use a USB-to-serial dongle. It shows up as /dev/cu.usbserial and /dev/tty.usbserial. When I do this everything seems to be hunky-dory: stty -f /dev/cu.usbserial speed 9600 baud; lflags: -icanon -isig -iexten -echo iflags: -icrnl -ixon -ixany -imaxbel -brkint oflags: -opost -onlcr -oxtabs cflags: cs8 -parenb Everything also works when I use the serial port tool to talk to it. If I run this piece of code while the above-mentioned serial port tool is open, everything also works. But as soon as I disconnect the tool the connection gets lost. #!/usr/bin/python import serial ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10) ser.write("<ID01><PA> \r\n") read_chars = ser.read(20) print read_chars ser.close() So the question is, what magicks do I need to perform to start talking to the serial port without the serial port tool? Is that a permissions problem? Also, what's the difference between /dev/cu.usbserial and /dev/tty.usbserial? Nope, no serial numbers. The thing is, the problem persists even with sudo-running the Python script, and the only thing that makes it go through is if I open the connection in the GUI tool that I mentioned. A: /dev/cu.xxxxx is the "callout" device, it's what you use when you establish a connection to the serial device and start talking to it. /dev/tty.xxxxx is the "dialin" device, used for monitoring a port for incoming calls for e.g. a fax listener. A: Have you tried watching the traffic between the GUI and the serial port to see if there is some kind of special command being sent across? Also just curious, Python is sending ASCII and not UTF-8 or something else right? The reason I ask is because I noticed your quote changes for the strings and in some languages that actually is the difference between ASCII and UTF-8.
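One hedged guess worth testing: GUI terminal programs usually assert the DTR/RTS modem-control lines when they open a port, and some devices ignore traffic until those lines are raised, which would explain code that only works while the tool holds the port open. Since most code in this document is C#, here is the equivalent experiment with System.IO.Ports, asserting both lines explicitly (the device path is a placeholder; in pyserial the analogous knobs are its DTR/RTS options):

using System;
using System.IO.Ports;

class SignWriter
{
    static void Main()
    {
        // Device path is a placeholder; on Mono/OS X it would be
        // something like "/dev/cu.usbserial".
        using (SerialPort port = new SerialPort("/dev/cu.usbserial", 9600))
        {
            // Assert the modem-control lines explicitly; a GUI terminal
            // typically does this on your behalf when it opens the port.
            port.DtrEnable = true;
            port.RtsEnable = true;
            port.ReadTimeout = 10000; // applies to blocking Read calls

            port.Open();
            port.Write("<ID01><PA> \r\n");

            // ReadExisting returns whatever bytes have arrived so far.
            Console.WriteLine(port.ReadExisting());
        }
    }
}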
Programmatically talking to a Serial Port in OS X or Linux
I have a Prolite LED sign that I'd like to set up to show scrolling search queries from Apache logs and other fun statistics. The problem is, my G5 does not have a serial port, so I have to use a USB-to-serial dongle. It shows up as /dev/cu.usbserial and /dev/tty.usbserial. When I do this everything seems to be hunky-dory: stty -f /dev/cu.usbserial speed 9600 baud; lflags: -icanon -isig -iexten -echo iflags: -icrnl -ixon -ixany -imaxbel -brkint oflags: -opost -onlcr -oxtabs cflags: cs8 -parenb Everything also works when I use the serial port tool to talk to it. If I run this piece of code while the above-mentioned serial port tool is open, everything also works. But as soon as I disconnect the tool the connection gets lost. #!/usr/bin/python import serial ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10) ser.write("<ID01><PA> \r\n") read_chars = ser.read(20) print read_chars ser.close() So the question is, what magicks do I need to perform to start talking to the serial port without the serial port tool? Is that a permissions problem? Also, what's the difference between /dev/cu.usbserial and /dev/tty.usbserial? Nope, no serial numbers. The thing is, the problem persists even with sudo-running the Python script, and the only thing that makes it go through is if I open the connection in the GUI tool that I mentioned.
[ "/dev/cu.xxxxx is the \"callout\" device, it's what you use when you establish a connection to the serial device and start talking to it. /dev/tty.xxxxx is the \"dialin\" device, used for monitoring a port for incoming calls for e.g. a fax listener.\n", "have you tried watching the traffic between the GUI and the serial port to see if there is some kind of special command being sent across? Also just curious, Python is sending ASCII and not UTF-8 or something else right? The reason I ask is because I noticed your quote changes for the strings and in some languages that actually is the difference between ASCII and UTF-8.\n" ]
[ 5, 0 ]
[]
[]
[ "linux", "macos", "python", "serial_port" ]
stackoverflow_0000003976_linux_macos_python_serial_port.txt
Q: How can I tell if a web client is blocking advertisements? What is the best way to record statistics on the number of visitors to my site who have set their browser to block ads? A: Since programs like AdBlock actually never request the advert, you would have to look at the server logs to see if the same user accessed a webpage but didn't access an advert. This is assuming the advert is on the same server. If your adverts are on a separate server, then I would suggest it's impossible to do so. The best way to stop users from blocking adverts is to have inline text adverts which are generated by the server and dished up inside your HTML. A: Add the user id to the request for the ad: <img src="./ads/viagra.jpg?{user.id}"/> that way you can check what ads are seen by which users. A: You need to think about the different ways that ads are blocked. The first thing to look at is whether they are running noscript, so you could add a script that would check for that. The next thing is to see if they are blocking flash, a small movie should do that. If you look at the adblock site, there is some indication of how it does blocking: How does element hiding work? If you look further down that page, you will see that conventional chrome probing will not work, so you need to try and parse the altered DOM. A: AdBlock forum says this is used to detect AdBlock. After some tweaking you could use this to gather some statistics. setTimeout("detect_abp()", 10000); var isFF = (navigator.userAgent.indexOf("Firefox") > -1) ? true : false, hasABP = false; function detect_abp() { if(isFF) { if(Components.interfaces.nsIAdblockPlus != undefined) { hasABP = true; } else { var AbpImage = document.createElement("img"); AbpImage.id = "abp_detector"; AbpImage.src = "/textlink-ads.jpg"; AbpImage.style.width = "0"; AbpImage.style.height = "0"; AbpImage.style.top = "-1000px"; AbpImage.style.left = "-1000px"; document.body.appendChild(AbpImage); hasABP = (document.getElementById("abp_detector").style.display == "none"); var e = document.getElementsByTagName("iframe"); for (var i = 0; i < e.length; i++) { if(e[i].clientHeight == 0) { hasABP = true; } } if(hasABP == true) { history.go(1); location = "http://www.tweaktown.com/supportus.html"; window.location(location); } } } } A: I suppose you could compare the ad prints with the page views on your website (which you can get from your analytics software).
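To turn the "tag the ad request with the user id" suggestion into numbers server-side, one hedged approach is a small ASP.NET handler that serves the ad image and logs each hit; comparing its log with ordinary page views shows which users never fetch ads. The paths and the logging sink below are illustrative only:

using System.Web;

// Illustrative handler: map it to the ad URL in web.config (that
// wiring is omitted here), then diff its hits against page views.
public class AdBeaconHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // The user id arrives as the raw query string, as in the
        // <img src="./ads/viagra.jpg?{user.id}"/> suggestion above.
        string userId = context.Request.Url.Query.TrimStart('?');

        // Hypothetical logging call -- swap in your real sink.
        System.Diagnostics.Trace.WriteLine("ad-hit:" + userId);

        context.Response.ContentType = "image/jpeg";
        context.Response.WriteFile(
            context.Server.MapPath("~/ads/viagra.jpg"));
    }
}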
How can I tell if a web client is blocking advertisements?
What is the best way to record statistics on the number of visitors to my site who have set their browser to block ads?
[ "Since programs like AdBlock actually never request the advert, you would have to look the server logs to see if the same user accessed a webpage but didn't access an advert. This is assuming the advert is on the same server.\nIf your adverts are on a separate server, then I would suggest it's impossible to do so.\nThe best way to stop users from blocking adverts, is to have inline text adverts which are generated by the server and dished up inside your html.\n", "Add the user id to the request for the ad:\n<img src=\"./ads/viagra.jpg?{user.id}\"/>\n\nthat way you can check what ads are seen by which users.\n", "You need to think about the different ways that ads are blocked. The first thing to look at is whether they are running noscript, so you could add a script that would check for that. \nThe next thing is to see if they are blocking flash, a small movie should do that.\nIf you look at the adblock site, there is some indication of how it does blocking:\nHow does element hiding work?\nIf you look further down that page, you will see that conventional chrome probing will not work, so you need to try and parse the altered DOM.\n", "AdBlock forum says this is used to detect AdBlock. After some tweaking you could use this to gather some statistics.\nsetTimeout(\"detect_abp()\", 10000);\nvar isFF = (navigator.userAgent.indexOf(\"Firefox\") > -1) ? true : false,\n hasABP = false;\n\nfunction detect_abp() {\n if(isFF) {\n if(Components.interfaces.nsIAdblockPlus != undefined) {\n hasABP = true;\n } else {\n var AbpImage = document.createElement(\"img\");\n AbpImage.id = \"abp_detector\";\n AbpImage.src = \"/textlink-ads.jpg\";\n AbpImage.style.width = \"0\";\n AbpImage.style.height = \"0\";\n AbpImage.style.top = \"-1000px\";\n AbpImage.style.left = \"-1000px\";\n document.body.appendChild(AbpImage);\n hasABP = (document.getElementById(\"abp_detector\").style.display == \"none\");\n\n var e = document.getElementsByTagName(\"iframe\");\n for (var i = 0; i < e.length; i++) {\n if(e[i].clientHeight == 0) {\n hasABP = true;\n }\n }\n if(hasABP == true) {\n history.go(1);\n location = \"http://www.tweaktown.com/supportus.html\";\n window.location(location);\n }\n }\n }\n}\n\n", "I suppose you could compare the ad prints with the page views on your website (which you can get from your analytics software).\n" ]
[ 11, 10, 4, 4, 3 ]
[]
[]
[ "analytics", "browser" ]
stackoverflow_0000002472_analytics_browser.txt
Q: ConfigurationManager.AppSettings Performance Concerns I plan to be storing all my config settings in my application's app.config section (using the ConfigurationManager.AppSettings class). As the user changes settings using the app's UI (clicking checkboxes, choosing radio buttons, etc.), I plan to be writing those changes out to the AppSettings. At the same time, while the program is running I plan to be accessing the AppSettings constantly from a process that will be constantly processing data. Changes to settings via the UI need to affect the data processing in real-time, which is why the process will be accessing the AppSettings constantly. Is this a good idea with regard to performance? Using AppSettings is supposed to be "the right way" to store and access configuration settings when writing .Net apps, but I worry that this method wasn't intended for a constant load (at least in terms of settings being constantly read). If anyone has experience with this, I would greatly appreciate the input. Update: I should probably clarify a few points. This is not a web application, so connecting a database to the application might be overkill simply for storing configuration settings. This is a Windows Forms application. According to the MSDN documentation, the ConfigurationManager is for storing not just application level settings, but user settings as well. (Especially important if, for instance, the application is installed as a partial-trust application.) Update 2: I accepted lomaxx's answer because Properties does indeed look like a good solution, without having to add any additional layers to my application (such as a database). When using Properties, it already does all the caching that others suggested. This means any changes and subsequent reads are all done in memory, making it extremely fast. Properties only writes the changes to disk when you explicitly tell it to. This means I can make changes to the config settings on-the-fly at run time and then only do a final save out to disk when the program exits. Just to verify it would actually be able to handle the load I need, I did some testing on my laptop and was able to do 750,000 reads and 7,500 writes per second using Properties. That is so far above and beyond what my application will ever even come close to needing that I feel quite safe in using Properties without impacting performance. A: since you're using a winforms app, if it's in .net 2.0 there's actually a user settings system (called Properties) that is designed for this purpose. This article on MSDN has a pretty good introduction into this If you're still worried about performance then take a look at SQL Compact Edition which is similar to SQLite but is the Microsoft offering which I've found plays very nicely with winforms and there's even the ability to make it work with Linq A: Check out SQLite, it seems like a good option for this particular scenario. A: Dylan, Don't use the application config file for this purpose, use a SQL DB (SQLite, MySQL, MSSQL, whatever) because you'll have to worry less about concurrency issues during reads and writes to the config file. You'll also have better flexibility in the type of data you want to store. The appSettings section is just a key/value list which you may outgrow as time passes and as the app matures. You could use custom config sections but then you're into a new problem area when it comes to the design. A: The appSettings isn't really meant for what you are trying to do. When your .NET application starts, it reads in the app.config file, and caches its contents in memory. For that reason, after you write to the app.config file, you'll have to somehow force the runtime to re-parse the app.config file so it can cache the settings again. This is unnecessary. The best approach would be to use a database to store your configuration settings. Barring the use of a database, you could easily set up an external XML configuration file. When your application starts, you could cache its contents in a NameValueCollection object or HashTable object. As you change/add settings, you would do it to that cached copy. When your application shuts down, or at an appropriate time interval, you can write the cache contents back out to file. A: Someone correct me if I'm wrong, but I don't think that AppSettings is typically meant to be used for these types of configuration settings. Normally you would only put in settings that remain fairly static (database connection strings, file paths, etc.). If you want to store customizable user settings, it would be better to create a separate preferences file, or ideally store those settings in a database. A: I would not use config files for storing user data. Use a db. A: Could I ask why you're not saving the user's settings in a database? Generally, I save application settings that are changed very infrequently in the appSettings section (the default email address error logs are sent to, the number of minutes after which you are automatically logged out, etc.) The scope of this really is at the application, not at the user, and is generally used for deployment settings. A: one thing I would look at doing is caching the appsettings on a read, then flushing the settings from the cache on the write which should minimize the amount of actual load the server has to deal with for processing the appSettings. Also, if possible, look at breaking the appSettings up into configSections so you can read, write and cache related settings. Having said all that, I would seriously consider looking at storing these values in a database as you seem to actually be storing user preferences, and not application settings.
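For completeness, a minimal hedged sketch of the cache-and-flush pattern several answers describe: reads and writes go to an in-memory dictionary, and disk is touched only on an explicit Save, for example once at application exit. The plain key=value file format is chosen purely for brevity and is not the app.config or Properties mechanism itself:

using System.Collections.Generic;
using System.IO;

// Minimal settings cache: all reads and writes are in-memory;
// Save() is the only disk write, e.g. called once on app exit.
public class SettingsCache
{
    private readonly Dictionary<string, string> _values =
        new Dictionary<string, string>();
    private readonly string _path;
    private readonly object _lock = new object();

    public SettingsCache(string path)
    {
        _path = path;
        if (File.Exists(path))
        {
            foreach (string line in File.ReadAllLines(path))
            {
                int eq = line.IndexOf('=');
                if (eq > 0)
                    _values[line.Substring(0, eq)] = line.Substring(eq + 1);
            }
        }
    }

    public string Get(string key)
    {
        lock (_lock)
        {
            string v;
            return _values.TryGetValue(key, out v) ? v : null;
        }
    }

    public void Set(string key, string value)
    {
        lock (_lock) { _values[key] = value; }
    }

    public void Save()
    {
        lock (_lock)
        {
            using (StreamWriter w = new StreamWriter(_path))
                foreach (KeyValuePair<string, string> kv in _values)
                    w.WriteLine(kv.Key + "=" + kv.Value);
        }
    }
}

Usage would be a single SettingsCache instance shared by the UI and the processing loop, with Save() called from the form's closing handler.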
ConfigurationManager.AppSettings Performance Concerns
I plan to be storing all my config settings in my application's app.config section (using the ConfigurationManager.AppSettings class). As the user changes settings using the app's UI (clicking checkboxes, choosing radio buttons, etc.), I plan to be writing those changes out to the AppSettings. At the same time, while the program is running I plan to be accessing the AppSettings constantly from a process that will be constantly processing data. Changes to settings via the UI need to affect the data processing in real-time, which is why the process will be accessing the AppSettings constantly. Is this a good idea with regard to performance? Using AppSettings is supposed to be "the right way" to store and access configuration settings when writing .Net apps, but I worry that this method wasn't intended for a constant load (at least in terms of settings being constantly read). If anyone has experience with this, I would greatly appreciate the input. Update: I should probably clarify a few points. This is not a web application, so connecting a database to the application might be overkill simply for storing configuration settings. This is a Windows Forms application. According to the MSDN documentation, the ConfigurationManager is for storing not just application level settings, but user settings as well. (Especially important if, for instance, the application is installed as a partial-trust application.) Update 2: I accepted lomaxx's answer because Properties does indeed look like a good solution, without having to add any additional layers to my application (such as a database). When using Properties, it already does all the caching that others suggested. This means any changes and subsequent reads are all done in memory, making it extremely fast. Properties only writes the changes to disk when you explicitly tell it to. This means I can make changes to the config settings on-the-fly at run time and then only do a final save out to disk when the program exits. Just to verify it would actually be able to handle the load I need, I did some testing on my laptop and was able to do 750,000 reads and 7,500 writes per second using Properties. That is so far above and beyond what my application will ever even come close to needing that I feel quite safe in using Properties without impacting performance.
[ "since you're using a winforms app, if it's in .net 2.0 there's actually a user settings system (called Properties) that is designed for this purpose. This article on MSDN has a pretty good introduction into this\nIf you're still worried about performance then take a look at SQL Compact Edition which is similar to SQLite but is the Microsoft offering which I've found plays very nicely with winforms and there's even the ability to make it work with Linq\n", "Check out SQLite, it seems like a good option for this particular scenario.\n", "Dylan,\nDon't use the application config file for this purpose, use a SQL DB (SQLite, MySQL, MSSQL, whatever) because you'll have to worry less about concurrency issues during reads and writes to the config file. \nYou'll also have better flexibility in the type of data you want to store. The appSettings section is just a key/value list which you may outgrow as time passes and as the app matures. You could use custom config sections but then you're into a new problem area when it comes to the design.\n", "The appSettings isn't really meant for what you are trying to do.\nWhen your .NET application starts, it reads in the app.config file, and caches its contents in memory. For that reason, after you write to the app.config file, you'll have to somehow force the runtime to re-parse the app.config file so it can cache the settings again. This is unnecessary \nThe best approach would be to use a database to store your configuration settings.\nBarring the use of a database, you could easily setup an external XML configuration file. When your application starts, you could cache its contents in a NameValueCollection object or HashTable object. As you change/add settings, you would do it to that cached copy. When your application shuts down, or at an appropriate time interval, you can write the cache contents back out to file.\n", "Someone correct me if I'm wrong, but I don't think that AppSettings is typically meant to be used for these type of configuration settings. Normally you would only put in settings that remain fairly static (database connection strings, file paths, etc.). If you want to store customizable user settings, it would be better to create a separate preferences file, or ideally store those settings in a database.\n", "I would not use config files for storing user data. Use a db.\n", "Could I ask why you're not saving the user's settings in a database?\nGenerally, I save application settings that are changed very infrequently in the appSettings section (the default email address error logs are sent to, the number of minutes after which you are automatically logged out, etc.) The scope of this really is at the application, not at the user, and is generally used for deployment settings.\n", "one thing I would look at doing is caching the appsettings on a read, then flushing the settings from the cache on the write which should minimize the amount of actual load the server has to deal with for processing the appSettings.\nAlso, if possible, look at breaking the appSettings up into configSections so you can read write and cache related settings.\nHaving said all that, I would seriously consider looking at storing these values in a database as you seem to actually be storing user preferences, and not application settings.\n" ]
[ 10, 2, 2, 2, 1, 1, 0, 0 ]
[]
[]
[ ".net", "c#", "configuration", "performance", "properties" ]
stackoverflow_0000004157_.net_c#_configuration_performance_properties.txt
Q: What is the best way to handle multiple permission types? I often encounter the following scenario where I need to offer many different types of permissions. I primarily use ASP.NET / VB.NET with SQL Server 2000. Scenario I want to offer a dynamic permission system that can work on different parameters. Let's say that I want to give either a department or just a specific person access to an application. And pretend that we have a number of applications that keeps growing. In the past, I have chosen one of the following two ways that I know to do this. Use a single permission table with special columns that are used for determining how to apply the parameters. The special columns in this example are TypeID and TypeAuxID. The SQL would look something like this. SELECT COUNT(PermissionID) FROM application_permissions WHERE (TypeID = 1 AND TypeAuxID = @UserID) OR (TypeID = 2 AND TypeAuxID = @DepartmentID) AND ApplicationID = 1 Use a mapping table for each type of permission, then joining them all together. SELECT COUNT(perm.PermissionID) FROM application_permissions perm LEFT JOIN application_UserPermissions emp ON perm.ApplicationID = emp.ApplicationID LEFT JOIN application_DepartmentPermissions dept ON perm.ApplicationID = dept.ApplicationID WHERE q.SectionID=@SectionID AND (emp.UserID=@UserID OR dept.DeptID=@DeptID OR (emp.UserID IS NULL AND dept.DeptID IS NULL)) AND ApplicationID = 1 ORDER BY q.QID ASC My Thoughts I hope that the examples make sense. I cobbled them together. The first example requires less work, but neither of them feels like the best answer. Is there a better way to handle this? A: I agree with John Downey. Personally, I sometimes use a flagged enumeration of permissions. This way you can use AND, OR, NOT and XOR bitwise operations on the enumeration's items. "[Flags] public enum Permission { VIEWUSERS = 1, // 2^0 // 0000 0001 EDITUSERS = 2, // 2^1 // 0000 0010 VIEWPRODUCTS = 4, // 2^2 // 0000 0100 EDITPRODUCTS = 8, // 2^3 // 0000 1000 VIEWCLIENTS = 16, // 2^4 // 0001 0000 EDITCLIENTS = 32, // 2^5 // 0010 0000 DELETECLIENTS = 64, // 2^6 // 0100 0000 }" Then, you can combine several permissions using the OR bitwise operator. For example, if a user can view & edit users, the binary result of the operation is 0000 0011 which converted to decimal is 3. You can then store the permission of one user into a single column of your Database (in our case it would be 3). Inside your application, you just need another bitwise operation (AND) to verify if a user has a particular permission or not. A: The way I typically go about coding permission systems is having 6 tables. Users - this is pretty straightforward; it is your typical users table Groups - this would be synonymous to your departments Roles - this is a table with all permissions generally also including a human readable name and a description Users_have_Groups - this is a many-to-many table of what groups a user belongs to Users_have_Roles - another many-to-many table of what roles are assigned to an individual user Groups_have_Roles - the final many-to-many table of what roles each group has At the beginning of a user's session you would run some logic that pulls out every role they have assigned, either directly or through a group. Then you code against those roles as your security permissions. Like I said this is what I typically do but your mileage may vary. A: In addition to John Downey and jdecuyper's solutions, I've also added an "Explicit Deny" bit at the end/beginning of the bitfield, so that you can perform additive permissions by group, role membership, and then subtract permissions based upon explicit deny entries, much like NTFS works, permission-wise. A: Honestly the ASP.NET Membership / Roles features would work perfectly for the scenario you described. Writing your own tables / procs / classes is a great exercise and you can get very nice control over minute details, but after doing this myself I've concluded it's better to just use the built in .NET stuff. A lot of existing code is designed to work around it which is nice as well. Writing from scratch took me about 2 weeks and it was nowhere near as robust as .NET's. You have to code so much crap (password recovery, auto lockout, encryption, roles, a permission interface, tons of procs, etc) and the time could be better spent elsewhere. Sorry if I didn't answer your question, I'm like the guy who says to learn c# when someone asks a vb question. A: An approach I've used in various applications is to have a generic PermissionToken class which has a changeable Value property. Then you query the requested application, and it tells you which PermissionTokens are needed in order to use it. For example, the Shipping application might tell you it needs: new PermissionToken() { Target = PermissionTokenTarget.Application, Action = PermissionTokenAction.View, Value = "ShippingApp" }; This can obviously be extended to Create, Edit, Delete etc and, because of the custom Value property, any application, module or widget can define its own required permissions. YMMV, but this has always been an efficient method for me which I have found to scale well.
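A short hedged sketch of how the flags enum from the first answer is combined and tested at runtime (member names shortened for the example); combining grants is a bitwise OR, and testing one is a bitwise AND:

using System;

[Flags]
public enum Permission
{
    ViewUsers = 1,
    EditUsers = 2,
    ViewProducts = 4
}

class FlagsDemo
{
    static void Main()
    {
        // OR combines grants: 0001 | 0010 == 0011 (decimal 3),
        // which is the single integer you would store per user.
        Permission granted = Permission.ViewUsers | Permission.EditUsers;
        Console.WriteLine((int)granted); // 3

        // AND tests a grant: a non-zero result means the bit is set.
        bool canEdit = (granted & Permission.EditUsers) != 0;
        bool canViewProducts = (granted & Permission.ViewProducts) != 0;
        Console.WriteLine(canEdit);         // True
        Console.WriteLine(canViewProducts); // False
    }
}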
What is the best way to handle multiple permission types?
I often encounter the following scenario where I need to offer many different types of permissions. I primarily use ASP.NET / VB.NET with SQL Server 2000. Scenario I want to offer a dynamic permission system that can work on different parameters. Let's say that I want to give either a department or just a specific person access to an application. And pretend that we have a number of applications that keeps growing. In the past, I have chosen one of the following two ways that I know to do this. Use a single permission table with special columns that are used for determining how to apply the parameters. The special columns in this example are TypeID and TypeAuxID. The SQL would look something like this. SELECT COUNT(PermissionID) FROM application_permissions WHERE (TypeID = 1 AND TypeAuxID = @UserID) OR (TypeID = 2 AND TypeAuxID = @DepartmentID) AND ApplicationID = 1 Use a mapping table for each type of permission, then joining them all together. SELECT COUNT(perm.PermissionID) FROM application_permissions perm LEFT JOIN application_UserPermissions emp ON perm.ApplicationID = emp.ApplicationID LEFT JOIN application_DepartmentPermissions dept ON perm.ApplicationID = dept.ApplicationID WHERE q.SectionID=@SectionID AND (emp.UserID=@UserID OR dept.DeptID=@DeptID OR (emp.UserID IS NULL AND dept.DeptID IS NULL)) AND ApplicationID = 1 ORDER BY q.QID ASC My Thoughts I hope that the examples make sense. I cobbled them together. The first example requires less work, but neither of them feels like the best answer. Is there a better way to handle this?
[ "I agree with John Downey.\nPersonally, I sometimes use a flagged enumeration of permissions. This way you can use AND, OR, NOT and XOR bitwise operations on the enumeration's items.\n\"[Flags]\npublic enum Permission\n{\n VIEWUSERS = 1, // 2^0 // 0000 0001\n EDITUSERS = 2, // 2^1 // 0000 0010\n VIEWPRODUCTS = 4, // 2^2 // 0000 0100\n EDITPRODUCTS = 8, // 2^3 // 0000 1000\n VIEWCLIENTS = 16, // 2^4 // 0001 0000\n EDITCLIENTS = 32, // 2^5 // 0010 0000\n DELETECLIENTS = 64, // 2^6 // 0100 0000\n}\"\n\nThen, you can combine several permissions using the AND bitwise operator. \nFor example, if a user can view & edit users, the binary result of the operation is 0000 0011 which converted to decimal is 3. \nYou can then store the permission of one user into a single column of your Database (in our case it would be 3).\nInside your application, you just need another bitwise operation (OR) to verify if a user has a particular permission or not.\n", "The way I typically go about coding permission systems is having 6 tables.\n\nUsers - this is pretty straight forward it is your typical users table\nGroups - this would be synonymous to your departments\nRoles - this is a table with all permissions generally also including a human readable name and a description\nUsers_have_Groups - this is a many-to-many table of what groups a user belongs to\nUsers_have_Roles - another many-to-many table of what roles are assigned to an individual user\nGroups_have_Roles - the final many-to-many table of what roles each group has\n\nAt the beginning of a users session you would run some logic that pulls out every role they have assigned, either directory or through a group. Then you code against those roles as your security permissions.\nLike I said this is what I typically do but your millage may vary.\n", "In addition to John Downey and jdecuyper's solutions, I've also added an \"Explicit Deny\" bit at the end/beginning of the bitfield, so that you can perform additive permissions by group, role membership, and then subtract permissions based upon explicit deny entries, much like NTFS works, permission-wise.\n", "Honestly the ASP.NET Membership / Roles features would work perfectly for the scenario you described. Writing your own tables / procs / classes is a great exercise and you can get very nice control over minute details, but after doing this myself I've concluded it's better to just use the built in .NET stuff. A lot of existing code is designed to work around it which is nice at well. Writing from scratch took me about 2 weeks and it was no where near as robust as .NETs. You have to code so much crap (password recovery, auto lockout, encryption, roles, a permission interface, tons of procs, etc) and the time could be better spent elsewhere.\nSorry if I didn't answer your question, I'm like the guy who says to learn c# when someone asks a vb question.\n", "An approach I've used in various applications is to have a generic PermissionToken class which has a changeable Value property. Then you query the requested application, it tells you which PermissionTokens are needed in order to use it.\nFor example, the Shipping application might tell you it needs:\nnew PermissionToken()\n{\n Target = PermissionTokenTarget.Application,\n Action = PermissionTokenAction.View,\n Value = \"ShippingApp\"\n};\n\nThis can obviously be extended to Create, Edit, Delete etc and, because of the custom Value property, any application, module or widget can define its own required permissions. 
YMMV, but this has always been an efficient method for me which I have found to scale well.\n" ]
[ 14, 11, 2, 2, 0 ]
[]
[]
[ "permissions", "sql" ]
stackoverflow_0000001451_permissions_sql.txt
Q: Why should I practice Test Driven Development and how should I start? Lots of people talk about writing tests for their code before they start writing their code. This practice is generally known as Test Driven Development or TDD for short. What benefits do I gain from writing software this way? How do I get started with this practice? A: There are a lot of benefits: You get immediate feedback on if your code is working, so you can find bugs faster By seeing the test go from red to green, you know that you have both a working regression test, and working code You gain confidence to refactor existing code, which means you can clean up code without worrying what it might break At the end you have a suite of regression tests that can be run during automated builds to give you greater confidence that your codebase is solid The best way to start is to just start. There is a great book by Kent Beck all about Test Driven Development. Just start with new code, don't worry about old code... whenever you feel you need to refactor some code, write a test for the existing functionality, then refactor it and make sure the tests stay green. Also, read this great article. A: The benefits part has recently been covered, as for where to start....on a small enterprisey system where there aren't too many unknowns so the risks are low. If you don't already know a testing framework (like NUnit), start by learning that. Otherwise start by writing your first test :) A: Benefits You figure out how to compartmentalize your code You figure out exactly what you want your code to do You know how it is supposed to act and, down the road, if refactoring breaks anything Gets you in the habit of making sure your code always knows what it is supposed to do Getting Started Just do it. Write a test case for what you want to do, and then write code that should pass the test. If you pass your test, great, you can move on to writing cases where your code will always fail (2+2 should not equal 5, for example). Once all of your tests pass, write your actual business logic to do whatever you want to do. If you are starting from scratch make sure you find a good testing suite that is easy to use. I like PHP so PHPUnit or SimpleTest work well. Almost all of the popular languages have some xUnit testing suite available to help build and automate testing.
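To make "write the test first" concrete, a hedged NUnit-style sketch with invented class and method names: the test is written before Calculator.Add exists, fails red, and the minimal implementation turns it green.

using NUnit.Framework;

// Written first: this fails to compile (then fails red) until
// Calculator.Add exists and returns the right value.
[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        Assert.AreEqual(4, new Calculator().Add(2, 2));
    }
}

// Written second, with just enough code to go green.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}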
Why should I practice Test Driven Development and how should I start?
Lots of people talk about writing tests for their code before they start writing their code. This practice is generally known as Test Driven Development or TDD for short. What benefits do I gain from writing software this way? How do I get started with this practice?
[ "There are a lot of benefits:\n\nYou get immediate feedback on if your code is working, so you can find bugs faster\nBy seeing the test go from red to green, you know that you have both a working regression test, and working code\nYou gain confidence to refactor existing code, which means you can clean up code without worrying what it might break\nAt the end you have a suite of regression tests that can be run during automated builds to give you greater confidence that your codebase is solid\n\nThe best way to start is to just start. There is a great book by Kent Beck all about Test Driven Development. Just start with new code, don't worry about old code... whenever you feel you need to refactor some code, write a test for the existing functionality, then refactor it and make sure the tests stay green. Also, read this great article.\n", "The benefits part has recently been covered, as for where to start....on a small enterprisey system where there aren't too many unknowns so the risks are low. If you don't already know a testing framework (like NUnit), start by learning that. Otherwise start by writing your first test :)\n", "Benefits\n\nYou figure out how to compartmentalize your code\nYou figure out exactly what you want your code to do\nYou know how it supposed to act and, down the road, if refactoring breaks anything\nGets you in the habit of making sure your code always knows what it is supposed to do\n\nGetting Started\nJust do it. Write a test case for what you want to do, and then write code that should pass the test. If you pass your test, great, you can move on to writing cases where your code will always fail (2+2 should not equal 5, for example).\nOnce all of your tests pass, write your actual business logic to do whatever you want to do. \nIf you are starting from scratch make sure you find a good testing suite that is easy to use. I like PHP so PHPUnit or SimpleTest work well. Almost all of the popular languages have some xUnit testing suite available to help build and automate testing.\n" ]
[ 37, 3, 2 ]
[ "In my opinion, the single greatest thing is that it clearly allows you to see if your code does what it is supposed to. This may seem obvious, but it is super easy to run astray of your original goals, as I have found out in the past :p\n" ]
[ -1 ]
[ "tdd", "testing" ]
stackoverflow_0000004303_tdd_testing.txt
Q: Where can I get the Windows Workflow "wca.exe" application? I am walking through the MS Press Windows Workflow Step-by-Step book and in chapter 8 it mentions a tool with the filename "wca.exe". This is supposed to be able to generate workflow communication helper classes based on an interface you provide it. I can't find that file. I thought it would be in the latest .NET 3.5 SDK, but I just downloaded and fully installed, and it's not there. Also, some MSDN forum posts had links posted that just go to 404s. So, where can I find wca.exe? A: Should be part of the .NET 3 SDK (and later versions as well). If you've already installed this, the path might look something like C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\wca.exe More info on Guy Burstein's blog. A: On my machine, with Visual Studio 2008 installed, it's in C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin
Where can I get the Windows Workflow "wca.exe" application?
I am walking through the MS Press Windows Workflow Step-by-Step book and in chapter 8 it mentions a tool with the filename "wca.exe". This is supposed to be able to generate workflow communication helper classes based on an interface you provide it. I can't find that file. I thought it would be in the latest .NET 3.5 SDK, but I just downloaded and fully installed, and it's not there. Also, some MSDN forum posts had links posted that just go to 404s. So, where can I find wca.exe?
[ "Should be part of the .NET 3 SDK (and later version as well). If you've already installed this, the path might look something like \nC:\\Program Files\\Microsoft SDKs\\Windows\\v6.0\\Bin\\wca.exe\nMore info on Guy Burstein's blog.\n", "On my machine, with Visual Studio 2008 installed, it's in \nC:\\Program Files\\Microsoft SDKs\\Windows\\v6.0A\\bin\n" ]
[ 1, 1 ]
[]
[]
[ ".net_3.5" ]
stackoverflow_0000004416_.net_3.5.txt
Q: .htaccess directives to *not* redirect certain URLs In an application that heavily relies on .htaccess RewriteRules for its PrettyURLs (CakePHP in my case), how do I correctly set up directives to exclude certain directories from this rewriting? That is: /appRoot/.htaccess app/ static/ By default every request to /appRoot/* is being rewritten to be picked up by app/webroot/index.php, where it's being analysed and corresponding controller actions are being invoked. This is done by these directives in .htaccess: RewriteBase /appRoot RewriteRule ^$ app/webroot/ [L] RewriteRule (.*) app/webroot/$1 [L] I now want to exclude a few directories like static/ from this rewriting. I tried with this before the Cake RewriteRules: RewriteCond $1 ^(static|otherDir).*$ [NC] RewriteRule (.*) - [L] It works insofar as requests are no longer rewritten, but now all requests are being skipped, even legitimate Cake requests which should not match ^(static|otherDir).*$. I tried several variations of these rules but can't get it to work the way I want. A: And the correct answer iiiiis... RewriteRule ^(a|bunch|of|old|directories).* - [NC,L] # all other requests will be forwarded to Cake RewriteRule ^$ app/webroot/ [L] RewriteRule (.*) app/webroot/$1 [L] I still don't get why the index.php file in the root directory was called initially even with these directives in place. It is now located in /appRoot/app/views/pages/home.ctp and handled through Cake as well. With this in place now, I suppose this would have worked as well (slightly altered version of Mike's suggestion, untested): RewriteCond $1 !^(a|bunch|of|old|directories).*$ [NC] RewriteRule ^(.*)$ app/webroot/$1 [L] A: Remove the [L] from the previous rules: RewriteBase /appRoot RewriteRule ^$ app/webroot/ RewriteRule (.*) app/webroot/$1 [L] means "Stop the rewriting process here and don't apply any more rewriting rules." A: Could you not apply the condition to the following rules, but with negation, as in (with some variation thereof, I'm not too good at remembering .htaccess rules, so the flags might be wrong): RewriteCond $1 !^(static|otherDir).*$ [NC] RewriteRule ^$ app/webroot/ [L] RewriteCond $1 !^(static|otherDir).*$ [NC] RewriteRule ^(.*)$ app/webroot/$1 [L]
.htaccess directives to *not* redirect certain URLs
In an application that heavily relies on .htaccess RewriteRules for its PrettyURLs (CakePHP in my case), how do I correctly set up directives to exclude certain directories from this rewriting? That is: /appRoot/.htaccess app/ static/ By default every request to /appRoot/* is being rewritten to be picked up by app/webroot/index.php, where it's being analysed and corresponding controller actions are being invoked. This is done by these directives in .htaccess: RewriteBase /appRoot RewriteRule ^$ app/webroot/ [L] RewriteRule (.*) app/webroot/$1 [L] I now want to exclude a few directories like static/ from this rewriting. I tried with this before the Cake RewriteRules: RewriteCond $1 ^(static|otherDir).*$ [NC] RewriteRule (.*) - [L] It works insofar as requests are no longer rewritten, but now all requests are being skipped, even legitimate Cake requests which should not match ^(static|otherDir).*$. I tried several variations of these rules but can't get it to work the way I want.
[ "And the correct answer iiiiis...\nRewriteRule ^(a|bunch|of|old|directories).* - [NC,L]\n\n# all other requests will be forwarded to Cake\nRewriteRule ^$ app/webroot/ [L]\nRewriteRule (.*) app/webroot/$1 [L]\n\nI still don't get why the index.php file in the root directory was called initially even with these directives in place. It is now located in\n/appRoot/app/views/pages/home.ctp\n\nand handled through Cake as well. With this in place now, I suppose this would have worked as well (slightly altered version of Mike's suggestion, untested):\nRewriteCond $1 !^(a|bunch|of|old|directories).*$ [NC]\nRewriteRule ^(.*)$ app/webroot/$1 [L]\n\n", "Remove the [L] from the previous rules:\nRewriteBase /appRoot\n\nRewriteRule ^$ app/webroot/ &nbsp; &nbsp;\nRewriteRule (.*) app/webroot/$1\n\n[L] means \"Stop the rewriting process here and don't apply any more rewriting rules.\"\n", "Could you not apply the condition to the following rules, but with negation, as in (with some variation thereof, I'm not too good at remembering .htaccess rules, so the flags might be wrong):\nRewriteCond $1 !^(static|otherDir).*$ [NC]\nRewriteRule ^$ app/webroot/ [L]\n\nRewriteCond $1 !^(static|otherDir).*$ [NC]\nRewriteRule ^$ app/webroot/$1 [L]\n\n" ]
[ 6, 1, 1 ]
[]
[]
[ ".htaccess", "apache", "mod_rewrite" ]
stackoverflow_0000003157_.htaccess_apache_mod_rewrite.txt
Q: DataTable Loop Performance Comparison Which of the following has better performance? I have seen method two implemented in JavaScript with huge performance gains; however, I was unable to measure any gain in C# and was wondering if the compiler already does method 2 even when written like method 1. The theory behind method 2 is that the code doesn't have to access DataTable.Rows.Count on every iteration, it can simply access the int c. Method 1 for (int i = 0; i < DataTable.Rows.Count; i++) { // Do Something } Method 2 for (int i = 0, c = DataTable.Rows.Count; i < c; i++) { // Do Something } A: No, it can't do that since there is no way to express that a value is constant over time. For the compiler to be able to do that, there would have to be a guarantee from the code returning the value that the value is constant and won't change for the duration of the loop. But, in this case, you're free to add new rows to the data table as part of your loop, and thus it's up to you to make that guarantee, in the way you have done it. So in short, the compiler will not do that optimization if the end-index is anything other than a variable. In the case of a variable, where the compiler can just look at the loop-code and see that this particular variable is not changed, it might do that and load the value into a register before starting the loop, but any performance gain from this would most likely be negligible, unless your loop body is empty. Conclusion: If you know, or are willing to accept, that the end loop index is constant for the duration of the loop, place it into a variable. Edit: Re-read your post, and yes, you might see negligible performance gains for your two cases as well, because the JITter optimizes the code. The JITter might optimize your end-index read into a direct access to the variable inside the data table that contains the row count, and a memory read isn't all that expensive anyway. If, on the other hand, reading that property was a very expensive operation, you'd see a more noticeable difference.
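One way to settle the question empirically is a Stopwatch micro-benchmark over the two loop shapes; a hedged sketch (row count and per-iteration work are arbitrary, and JIT warm-up means the first run should be discarded):

using System;
using System.Data;
using System.Diagnostics;

class LoopBench
{
    static void Main()
    {
        DataTable table = new DataTable();
        table.Columns.Add("n", typeof(int));
        for (int i = 0; i < 100000; i++)
            table.Rows.Add(i);

        Stopwatch sw = Stopwatch.StartNew();
        int sum1 = 0;
        for (int i = 0; i < table.Rows.Count; i++) // property read each pass
            sum1 += i;
        sw.Stop();
        Console.WriteLine("Method 1: " + sw.ElapsedTicks + " ticks");

        sw = Stopwatch.StartNew();
        int sum2 = 0;
        for (int i = 0, c = table.Rows.Count; i < c; i++) // hoisted count
            sum2 += i;
        sw.Stop();
        Console.WriteLine("Method 2: " + sw.ElapsedTicks + " ticks");
    }
}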
DataTable Loop Performance Comparison
Which of the following has better performance? I have seen method two implemented in JavaScript with huge performance gains; however, I was unable to measure any gain in C# and was wondering if the compiler already does method 2 even when written like method 1. The theory behind method 2 is that the code doesn't have to access DataTable.Rows.Count on every iteration, it can simply access the int c. Method 1 for (int i = 0; i < DataTable.Rows.Count; i++) { // Do Something } Method 2 for (int i = 0, c = DataTable.Rows.Count; i < c; i++) { // Do Something }
[ "No, it can't do that since there is no way to express constant over time for a value.\nIf the compiler should be able to do that, there would have to be a guarantee from the code returning the value that the value is constant, and for the duration of the loop won't change.\nBut, in this case, you're free to add new rows to the data table as part of your loop, and thus it's up to you to make that guarantee, in the way you have done it.\nSo in short, the compiler will not do that optimization if the end-index is anything other than a variable.\nIn the case of a variable, where the compiler can just look at the loop-code and see that this particular variable is not changed, it might do that and load the value into a register before starting the loop, but any performance gain from this would most likely be negligible, unless your loop body is empty.\nConclusion: If you know, or is willing to accept, that the end loop index is constant for the duration of the loop, place it into a variable.\n\nEdit: Re-read your post, and yes, you might see negligible performance gains for your two cases as well, because the JITter optimizes the code. The JITter might optimize your end-index read into a direct access to the variable inside the data table that contains the row count, and a memory read isn't all that expensive anyway. If, on the other hand, reading that property was a very expensive operation, you'd see a more noticable difference.\n" ]
[ 6 ]
[]
[]
[ "c#", "loops", "performance" ]
stackoverflow_0000004556_c#_loops_performance.txt
Q: How can I create Prototype Methods (like JavaScript) in C#.Net? How is it possible to make prototype methods in C#.Net? In JavaScript, I can do the following to create a trim method for the string object: String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g,""); } How can I go about doing this in C#.Net? A: You can't dynamically add methods to existing objects or classes in .NET, except by changing the source for that class. You can, however, in C# 3.0, use extension methods, which look like new methods, but are compile-time magic. To do this for your code: public static class StringExtensions { public static String trim(this String s) { return s.Trim(); } } To use it: String s = " Test "; s = s.trim(); This looks like a new method, but will compile the exact same way as this code: String s = " Test "; s = StringExtensions.trim(s); What exactly are you trying to accomplish? Perhaps there are better ways of doing what you want? A: It sounds like you're talking about C#'s Extension Methods. You add functionality to existing classes by inserting the "this" keyword before the first parameter. The method has to be a static method in a static class. Strings in .NET already have a "Trim" method, so I'll use another example. public static class MyStringEtensions { public static bool ContainsMabster(this string s) { return s.Contains("Mabster"); } } So now every string has a tremendously useful ContainsMabster method, which I can use like this: if ("Why hello there, Mabster!".ContainsMabster()) { /* ... */ } Note that you can also add extension methods to interfaces (eg IList), which means that any class implementing that interface will also pick up that new method. Any extra parameters you declare in the extension method (after the first "this" parameter) are treated as normal parameters. A: You need to create an extension method, which requires .NET 3.5. The method needs to be static, in a static class. The first parameter of the method needs to be prefixed with "this" in the signature. public static string MyMethod(this string input) { // do things } You can then call it like "asdfas".MyMethod(); A: Using the 3.5 compiler you can use an Extension Method: public static void Trim(this string s) { // implementation } You can use this on a CLR 2.0 targeted project (3.5 compiler) by including this hack: namespace System.Runtime.CompilerServices { [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)] public sealed class ExtensionAttribute : Attribute { } }
How can I create Prototype Methods (like JavaScript) in C#.Net?
How is it possible to make prototype methods in C#.Net? In JavaScript, I can do the following to create a trim method for the string object: String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g,""); } How can I go about doing this in C#.Net?
[ "You can't dynamically add methods to existing objects or classes in .NET, except by changing the source for that class.\nYou can, however, in C# 3.0, use extension methods, which look like new methods, but are compile-time magic.\nTo do this for your code:\npublic static class StringExtensions\n{\n public static String trim(this String s)\n {\n return s.Trim();\n }\n}\n\nTo use it:\nString s = \" Test \";\ns = s.trim();\n\nThis looks like a new method, but will compile the exact same way as this code:\nString s = \" Test \";\ns = StringExtensions.trim(s);\n\nWhat exactly are you trying to accomplish? Perhaps there are better ways of doing what you want?\n", "It sounds like you're talking about C#'s Extension Methods. You add functionality to existing classes by inserting the \"this\" keyword before the first parameter. The method has to be a static method in a static class. Strings in .NET already have a \"Trim\" method, so I'll use another example.\npublic static class MyStringEtensions\n{\n public static bool ContainsMabster(this string s)\n {\n return s.Contains(\"Mabster\");\n }\n}\n\nSo now every string has a tremendously useful ContainsMabster method, which I can use like this:\nif (\"Why hello there, Mabster!\".ContainsMabster()) { /* ... */ }\n\nNote that you can also add extension methods to interfaces (eg IList), which means that any class implementing that interface will also pick up that new method.\nAny extra parameters you declare in the extension method (after the first \"this\" parameter) are treated as normal parameters.\n", "You need to create an extension method, which requires .NET 3.5. The method needs to be static, in a static class. The first parameter of the method needs to be prefixed with \"this\" in the signature.\npublic static string MyMethod(this string input)\n{\n // do things\n}\n\nYou can then call it like\n\"asdfas\".MyMethod();\n\n", "Using the 3.5 compiler you can use an Extension Method:\npublic static void Trim(this string s)\n{\n // implementation\n}\n\nYou can use this on a CLR 2.0 targeted project (3.5 compiler) by including this hack:\nnamespace System.Runtime.CompilerServices\n{\n [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)]\n public sealed class ExtensionAttribute : Attribute\n {\n }\n}\n\n" ]
[ 22, 5, 0, 0 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0000004610_.net_c#.txt
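As a footnote to the interface point in the answers above, here is a minimal sketch of an extension method declared against IList<T>. The SecondOrDefault name and behaviour are my own invention for illustration, not a framework method:

using System;
using System.Collections.Generic;

public static class ListExtensions
{
    // Declared against the interface, so List<T>, arrays and any
    // other IList<T> implementation all pick this method up.
    public static T SecondOrDefault<T>(this IList<T> list)
    {
        return list.Count > 1 ? list[1] : default(T);
    }
}

class Program
{
    static void Main()
    {
        IList<int> numbers = new List<int> { 1, 2, 3 };
        Console.WriteLine(numbers.SecondOrDefault()); // prints 2
    }
}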
Q: Verifying files for testing I was working with quality yesterday doing some formal testing. In their procedure they were verifying all files on the test machine were pulled from the release. The way they were verifying these files were the same was by checking the size and the date/time stamp Windows put on them in Windows Explorer. These happened to be off for another reason, which I was able to track down. Is this a valid way to verify a file is the same? I didn't think so and started to argue, but I am younger here so thought I shouldn't push it too far. I wanted to argue they should do a binary compare on the file to verify its contents are exact. In my experience time/date stamps and size attributes don't always act as expected. Any thoughts??? A: The only 100% way to figure out if two files are equal is to do a binary comparison of the two. If you can live with the risk of false positives (ie. two files which aren't 100% identical but your code says they are), then the digest and checksum algorithms can be used to lessen the work, particularly if the files live on two different machines with less than optimal bandwidth so that a binary comparison is infeasible. The digest and checksum algorithms all have chances of false positives, but the exact chance varies with the algorithm. General rule is that the more crypto-made it is, and the more bits it outputs, the less chance of a false positive. Even the CRC-32 algorithm is fairly good to use and it should be easy to find code examples on the internet that implement it. If you only do a size/timestamp comparison then I'm sorry to say that this is easy to circumvent and won't actually give you much of a certainty that the files are the same or different. It depends though, if you know that in your world, timestamps are kept, and only changed when the file is modified, then you can use it, otherwise it holds no guarantee. A: Hashing is very good. But the other, slightly lower tech alternative is to run a diff tool like WinMerge or TextWrangler and compare the two versions of each file. Boring and there's room for human error. Best of all, use version control to ensure the files you're testing are the files you edited and the ones you're going to launch. We have checkout folders from our repo as the staging and live sites, so once you've committed the changes from your working copy, you can be 100% sure that the files you test, push to staging and then live are the same, because you just run "svn update" on each box and check the revision number. Oh, and if you need to roll back in a hurry (it happens to us all sometime or another) you just run svn update again with the -r switch and go back to a previous revision virtually instantly. A: I would do something like an md5sum hash on the files and compare that to the known hashes from the release. They will be more accurate than just date/time comparisons and should be able to be automated more. A: The normal way is to compute a hash of the two files and compare that. MD5 and SHA1 are typical hash algorithms. md5sum should be installed by default on most unix type machines, and Wikipedia's md5sum article has links to some Windows implementations. A: You should do a CRC check on each file... from the wiki: Cyclic redundancy check, a type of hash function used to produce a checksum, in order to detect errors in transmission or storage. It produces an almost unique value based on the contents of the file.
Verifying files for testing
I was working with quality yesterday doing some formal testing. In their procedure they were verifying all files on the test machine were pulled from the release. The way they were verifying these files were the same was by checking the size and the date/time stamp Windows put on them in Windows Explorer. These happened to be off for another reason, which I was able to track down. Is this a valid way to verify a file is the same? I didn't think so and started to argue, but I am younger here so thought I shouldn't push it too far. I wanted to argue they should do a binary compare on the file to verify its contents are exact. In my experience time/date stamps and size attributes don't always act as expected. Any thoughts???
[ "The only 100% way to figure out if two files are equal is to do a binary comparison of the two.\nIf you can live with the risk of false positives (ie. two files which aren't 100% identical but your code says they are), then the digest and checksum algorithms can be used to lessen the work, particularly if the files lives on two different machines with less than optimal bandwidth so that a binary comparison is infeasible.\nThe digest and checksum algorithms all have chances of false positives, but the exact chance varies with the algorithm. General rule is that the more crypto-made it is, and the more bits it outputs, the less chance of a false positive.\nEven the CRC-32 algorithm is fairly good to use and it should be easy to find code examples on the internet that implements it.\nIf you only do a size/timestamp comparison then I'm sorry to say that this is easy to circumvent and won't actually give you much of a certainty that the files are the same or different.\nIt depends though, if you know that in your world, timestamps are kept, and only changed when the file is modified, then you can use it, otherwise it holds no guarantee.\n", "Hashing is very good. But the other, slightly lower tech alternative is to run a diff tool like WinMerge or TextWrangler and compare the two versions of each file. Boring and there's room for human error.\nBest of all, use version control to ensure the files you're testing are the files you edited and the ones you're going to launch. We have checkout folders from our repo as the staging and live sites, so once you've committed the changes from your working copy, you can be 100% sure that the files you test, push to staging and then live are the same, because you just run \"svn update\" on each box and check the revision number.\nOh, and if you need to roll back in a hurry (it happens to us all sometime or another) you just run svn update again with the -r switch and go back to a previous revision virtually instantly.\n", "I would do something like an md5sum hash on the files and compare that to the known hashes from the release. They will be more accurate than just date/time comparisons and should be able to be automated more.\n", "The normal way is to compute a hash of the two files and compare that. MD5 and SHA1 are typical hash algorithms. md5sum should be installed by default on most unix type machines, and Wikipedia's md5sum article has links to some windows implementations.\n", "You should do a CRC check on each file... from the wiki:\nCyclic redundancy check, a type of hash function used to produce a checksum, in order to detect errors in transmission or storage.\nIt produces an almost unique value based on the contents of the file.\n" ]
[ 3, 3, 1, 1, 0 ]
[]
[]
[ "testing", "windows" ]
stackoverflow_0000004665_testing_windows.txt
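To make the hashing suggestions above concrete, here is a small C# sketch. The file paths are placeholders, and the usual (tiny) false-positive caveat from the first answer applies:

using System;
using System.IO;
using System.Security.Cryptography;

class FileVerifier
{
    static string HashFile(string path)
    {
        using (MD5 md5 = MD5.Create())
        using (FileStream stream = File.OpenRead(path))
        {
            // Hex digest of the file contents.
            return BitConverter.ToString(md5.ComputeHash(stream));
        }
    }

    static void Main()
    {
        // Hypothetical paths: the released file vs. the test machine's copy.
        bool same = HashFile(@"C:\release\app.dll") == HashFile(@"C:\testmachine\app.dll");
        Console.WriteLine(same ? "Files match" : "Files differ");
    }
}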
Q: How do you resolve a domain name to an IP address with .NET/C#? How do you resolve a domain name to an IP address with .NET/C#? A: using System.Net; foreach (IPAddress address in Dns.GetHostAddresses("www.google.com")) { Console.WriteLine(address.ToString()); } A: Try using the System.Net.Dns class
How do you resolve a domain name to an IP address with .NET/C#?
How do you resolve a domain name to an IP address with .NET/C#?
[ "using System.Net;\n\nforeach (IPAddress address in Dns.GetHostAddresses(\"www.google.com\"))\n{\n Console.WriteLine(address.ToString());\n}\n\n", "Try using the System.Net.Dns class\n" ]
[ 20, 1 ]
[]
[]
[ ".net", "c#", "dns", "reverse_dns" ]
stackoverflow_0000004816_.net_c#_dns_reverse_dns.txt
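Since the tags mention reverse DNS as well, it may be worth noting that the same Dns class works in the other direction. A short sketch (the address is just an example, and GetHostEntry throws a SocketException when no reverse record exists):

using System;
using System.Net;

class ReverseLookup
{
    static void Main()
    {
        // Reverse lookup: IP address -> host name.
        IPHostEntry entry = Dns.GetHostEntry(IPAddress.Parse("8.8.8.8"));
        Console.WriteLine(entry.HostName);
    }
}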
Q: How do I configure eclipse (zend studio 6) to hint and code complete several languages? My dream IDE does full code hints, explains and completes PHP, Javascript, HTML and CSS. I know it exists! so far, Zend studio 6, under the Eclipse IDE does a great job at hinting PHP, some Javascript and HTML, any way I can expand this? edit: a bit more information: right now, using zend-6 under eclipse, i type in <?php p //(a single letter "p") and I get a hint tooltip with all the available php functions that begin with "p" (phpinfo(), parse_ini_file(), parse_str(), etc...), each with its own explanation: phpinfo()->"outputs lots of PHP information", the same applies for regular HTML (no explanations however). However, I get nothing when I do: <style> b /* (a single letter "b") */ I'd love it if I could get, from that "b" suggestions for "border", "bottom", etc. The same applies for Javascript. Any ideas? A: I think the JavaScript and CSS need to be in separate files for this to work. Example of CSS autocomplete in Eclipse: Starting to type border Then setting thickness Then choosing the color Chose red, and it added the ; for me Works pretty good IMHO. A: The default CSS and HTML editors for Eclipse are really good. The default javascript editor does an OK job, but it needs a little work. I just tested this in Eclipse 3.3.2 function test(){ } te<CTRL+SPACE> and it completed the method for me as did this: var test = function(){ }; te<CTRL+SPACE> Can you expand on what more you wanted it to do?
How do I configure eclipse (zend studio 6) to hint and code complete several languages?
My dream IDE does full code hints, explains and completes PHP, Javascript, HTML and CSS. I know it exists! so far, Zend studio 6, under the Eclipse IDE does a great job at hinting PHP, some Javascript and HTML, any way I can expand this? edit: a bit more information: right now, using zend-6 under eclipse, i type in <?php p //(a single letter "p") and I get a hint tooltip with all the available php functions that begin with "p" (phpinfo(), parse_ini_file(), parse_str(), etc...), each with its own explanation: phpinfo()->"outputs lots of PHP information", the same applies for regular HTML (no explanations however). However, I get nothing when I do: <style> b /* (a single letter "b") */ I'd love it if I could get, from that "b" suggestions for "border", "bottom", etc. The same applies for Javascript. Any ideas?
[ "I think the JavaScript and CSS need to be in separate files for this to work.\nExample of CSS autocomplete in Eclipse:\nStarting to type border\n\n\n\nThen setting thickness\n\n\n\nThen choosing the color\n\n\n\nChose red, and it added the ; for me\n\n\n\nWorks pretty good IMHO.\n", "The default CSS and HTML editors for Eclipse are really good. The default javascript editor does an OK job, but it needs a little work.\nI just tested this in Eclipse 3.3.2\nfunction test(){\n\n}\n\nte<CTRL+SPACE>\n\nand it completed the method for me as did this:\nvar test = function(){\n\n};\n\n\nte<CTRL+SPACE>\n\nCan you expand on what more you wanted it to do?\n" ]
[ 2, 0 ]
[]
[]
[ "code_completion", "zend_studio" ]
stackoverflow_0000004839_code_completion_zend_studio.txt
Q: What does this error mean SECJ0222E in WebSphere Application Server 5.1 I found this on the IBM support site: Problem: A JAAS LoginContext could not be created due to the unexpected exception. User response: The problem could be due to a configuration error. But I have no other indication and can't determine the final reason for this error. Any suggestions? A: Have you obtained the fix from http://www-1.ibm.com/support/docview.wss?rs=404&uid=swg1PK17150?
What does this error mean SECJ0222E in WebSphere Application Server 5.1
I found this on the IBM support site: Problem: A JAAS LoginContext could not be created due to the unexpected exception. User response: The problem could be due to a configuration error. But I have no other indication and can't determine the final reason for this error. Any suggestions?
[ "Have you obtained the fix from\nhttp://www-1.ibm.com/support/docview.wss?rs=404&uid=swg1PK17150?\n" ]
[ 1 ]
[]
[]
[ "jaas", "websphere" ]
stackoverflow_0000004995_jaas_websphere.txt
Q: What point should someone decide to switch Database Systems When developing, whether it's Web or Desktop, at which point should a developer switch from SQLite, MySQL, MS SQL, etc A: It depends on what you are doing. You might switch if: You need more scalability or better performance - say from SQLite to SQL Server or Oracle. You need access to more specific datatypes. You need to support a customer that only runs a particular database. You need better DBA tools. Your application is using a different platform where your database no longer runs, or its libraries do not run. You have the ability/time/budget to actually make the change. Depending on the situation, the migration could be a bigger project than everything in the project up to that point. Migrations like these are great places to introduce inconsistencies, or to lose data, so a lot of care is required. There are many more reasons for switching and it all depends on your requirements and the attributes of the databases. A: You should switch databases at milestone 2.3433, 3ps prior to the left branch of dendrite 8,151,215. You should switch databases when you have a reason to do so, would be my advice. If your existing database is performing to your expectations, supports the load that is being placed on it by your production systems, has the features you require in your applications and you aren't bored with it, why change? However, if you find your application isn't scaling, or you are designing an application that has high load or scalability requirements and your research tells you your current database platform is weak in that area, or, as was already mentioned, you need some spatial analysis or feature that a particular database has, well there you go. Another consideration might be taking up the use of a database agnostic ORM tool that can allow you to experiment freely with different database platforms with a simple configuration setting. That was the trigger for us to consider trying out something new in the DB department. If our application can handle any DB the ORM can handle, why pay licensing fees on a commercial database when an open source DB works just as well for the levels of performance we require? The bottom line, though, is that with databases or any other technology, I think there are no "business rules" that will tell you when it is time to switch - your scenario will tell you it is time to switch because something in your solution won't be quite right, and if you aren't at that point, no need to change. A: BrianLy hit the nail on the head, but I'd also add that you may end up using different databases at different levels of development. It's not uncommon for developers to use SQLite on their workstation when they're coding against their personal development server, and then have the staging and/or production sites using a different database tool. Of course, if you're using extensions or capabilities specific to a certain database tool (say, PostGIS in PostGreSQL), then obviously that wouldn't work.
What point should someone decide to switch Database Systems
When developing, whether it's Web or Desktop, at which point should a developer switch from SQLite, MySQL, MS SQL, etc
[ "It depends on what you are doing. You might switch if:\n\nYou need more scalability or better performance - say from SQLite to SQL Server or Oracle.\nYou need access to more specific datatypes.\nYou need to support a customer that only runs a particular database.\nYou need better DBA tools.\nYour application is using a different platform where your database no longer runs, or it's libraries do not run.\nYou have the ability/time/budget to actually make the change. Depending on the situation, the migration could be a bigger project than everything in the project up to that point. Migrations like these are great places to introduce inconsistencies, or to lose data, so a lot of care is required.\n\nThere are many more reasons for switching and it all depends on your requirements and the attributes of the databases.\n", "You should switch databases at milestone 2.3433, 3ps prior to the left branch of dendrite 8,151,215.\nYou should switch databases when you have a reason to do so, would be my advice. If your existing database is performing to your expectations, supports the load that is being placed on it by your production systems, has the features you require in your applications and you aren't bored with it, why change? However, if you find your application isn't scaling, or you are designing an application that has high load or scalability requirements and your research tells you your current database platform is weak in that area, or, as was already mentioned, you need some spatial analysis or feature that a particular database has, well there you go. \nAnother consideration might be taking up the use of a database agnostic ORM tool that can allow you to experiment freely with different database platforms with a simple configuration setting. That was the trigger for us to consider trying out something new in the DB department. If our application can handle any DB the ORM can handle, why pay licensing fees on a commercial database when an open source DB works just as well for the levels of performance we require?\nThe bottom line, though, is that with databases or any other technology, I think there are no \"business rules\" that will tell you when it is time to switch - your scenario will tell you it is time to switch because something in your solution won't be quite right, and if you aren't at that point, no need to change.\n", "BrianLy hit the nail on the head, but I'd also add that you may end up using different databases at different levels of development. It's not uncommon for developers to use SQLite on their workstation when they're coding against their personal development server, and then have the staging and/or production sites using a different database tool.\nOf course, if you're using extensions or capabilities specific to a certain database tool (say, PostGIS in PostGreSQL), then obviously that wouldn't work.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "database", "sql" ]
stackoverflow_0000004891_database_sql.txt
Q: Error ADMA5026E for WebSphere Application Server Network Deployment What am I doing wrong that I get the ADMA5026E error when deploying an application with the NetworkDeployment Console? A: Try IBM Information Center
Error ADMA5026E for WebSphere Application Server Network Deployment
What am I doing wrong that I get the ADMA5026E error when deploying an application with the NetworkDeployment Console?
[ "Try\nIBM Information Center\n" ]
[ 1 ]
[]
[]
[ "deployment", "websphere" ]
stackoverflow_0000005013_deployment_websphere.txt
Q: Upload form does not work in Firefox 3 with Mac OS X? Today, I ran into this weird problem with a user using Mac OS X. This user always had a failed upload. The form uses a regular "input type=file". The user could upload using any browser except Firefox 3 on his Mac. Only this particular user was seeing this error. Obviously, the problem is only with this one particular user. A: User corrected this weird problem by recreating their FireFox profile. How to manage FireFox profiles I imagine a re-install of FireFox would have corrected the problem as well. A: I imagine a re-install of FireFox would have corrected the problem as well. Profile related problems cannot usually be solved by re-installing Firefox since reinstalling (or upgrading) would re-use the same "damaged" profile.
Upload form does not work in Firefox 3 with Mac OS X?
Today, I ran into this weird problem with a user using Mac OS X. This user always had a failed upload. The form uses a regular "input type=file". The user could upload using any browser except Firefox 3 on his Mac. Only this particular user was seeing this error. Obviously, the problem is only with this one particular user.
[ "User corrected this weird problem by recreating their FireFox profile.\nHow to manage FireFox profiles\nI imagine a re-install of FireFox would have corrected the problem as well.\n", "\nI imagine a re-install of FireFox would have corrected the problem as well.\n\nProfile related problems cannot usually be solved by re-installing Firefox since reinstalling (or upgrading) would re-use the same \"damaged\" profile.\n" ]
[ 2, 0 ]
[]
[]
[ "firefox", "macos", "upload" ]
stackoverflow_0000005084_firefox_macos_upload.txt
Q: LINQ to SQL strings to enums LINQ to SQL allows table mappings to automatically convert back and forth to Enums by specifying the type for the column - this works for strings or integers. Is there a way to make the conversion case insensitive or add a custom mapping class or extension method into the mix so that I can specify what the string should look like in more detail? Reasons for doing so might be in order to supply a nicer naming convention inside some new funky C# code in a system where the data schema is already set (and is being relied upon by some legacy apps) so the actual text in the database can't be changed. A: You can always add a partial class with the same name as your LinqToSql class, and then define your own parameters and functions. These will then be accessible as object parameters and methods for this object, the same way as the auto-generated LinqToSql methods are accessible. Example: You have a LinqToSql class named Car which maps to the Car table in the DB. You can then add a file to App_Code with the following code in it: public partial class Car { // Add properties and methods to extend the functionality of Car } I am not sure if this totally meets your requirement of changing the way that Enums are mapped into a column. However, you could add a parameter where the get/set properties will work to map the enums that you need while keeping things case-insensitive.
LINQ to SQL strings to enums
LINQ to SQL allows table mappings to automatically convert back and forth to Enums by specifying the type for the column - this works for strings or integers. Is there a way to make the conversion case insensitive or add a custom mapping class or extension method into the mix so that I can specify what the string should look like in more detail? Reasons for doing so might be in order to supply a nicer naming convention inside some new funky C# code in a system where the data schema is already set (and is being relied upon by some legacy apps) so the actual text in the database can't be changed.
[ "You can always add a partial class with the same name as your LinqToSql class, and then define your own parameters and functions. These will then be accessible as object parameters and methods for this object, the same way as the auto-generated LinqToSql methods are accessible.\nExample: You have a LinqToSql class named Car which maps to the Car table in the DB. You can then add a file to App_Code with the following code in it:\npublic partial class Car {\n // Add properties and methods to extend the functionality of Car\n}\n\nI am not sure if this totally meets your requirement of changing the way that Enums are mapped into a column. However, you could add a parameter where the get/set properties will work to map the enums that you need while keeping things case-insensitive.\n" ]
[ 3 ]
[]
[]
[ "linq_to_sql" ]
stackoverflow_0000004939_linq_to_sql.txt
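Building on the partial-class suggestion above, a case-insensitive enum wrapper over the legacy string column might look like the sketch below. Contract, StatusCode and the ContractStatus values are all hypothetical names standing in for the real schema:

using System;

public enum ContractStatus { Active, Expired, Pending }

// Stand-in for the designer-generated half of the partial class,
// which would normally map StatusCode to the legacy string column.
public partial class Contract
{
    public string StatusCode { get; set; }
}

public partial class Contract
{
    // Exposes the string column as an enum; the 'true' argument makes
    // Enum.Parse case-insensitive, so "ACTIVE" and "active" both work.
    public ContractStatus Status
    {
        get { return (ContractStatus)Enum.Parse(typeof(ContractStatus), StatusCode, true); }
        set { StatusCode = value.ToString(); }
    }
}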
Q: Timer-based event triggers I am currently working on a project with specific requirements. A brief overview of these are as follows: Data is retrieved from external webservices Data is stored in SQL 2005 Data is manipulated via a web GUI The windows service that communicates with the web services has no coupling with our internal web UI, except via the database. Communication with the web services needs to be both time-based, and triggered via user intervention on the web UI. The current (pre-pre-production) model for web service communication triggering is via a database table that stores trigger requests generated from the manual intervention. I do not really want to have multiple trigger mechanisms, but would like to be able to populate the database table with triggers based upon the time of the call. As I see it there are two ways to accomplish this. 1) Adapt the trigger table to store two extra parameters. One being "Is this time-based or manually added?" and a nullable field to store the timing details (exact format to be determined). If it is a manually created trigger, mark it as processed when the trigger has been fired, but not if it is a timed trigger. or 2) Create a second windows service that creates the triggers on-the-fly at timed intervals. The second option seems like a fudge to me, but the management of option 1 could easily turn into a programming nightmare (how do you know if the last poll of the table returned the event that needs to fire, and how do you then stop it re-triggering on the next poll) I'd appreciate it if anyone could spare a few minutes to help me decide which route (one of these two, or possibly a third, unlisted one) to take. A: Why not use a SQL Job instead of the Windows Service? You can encapsulate all of your db "trigger" code in Stored Procedures. Then your UI and SQL Job can call the same Stored Procedures and create the triggers the same way whether it's manually or at a time interval. A: The way I see it is this. You have a Windows Service, which is playing the role of a scheduler and in it there are some classes which simply call the webservices and put the data in your databases. So, you can use these classes directly from the WebUI as well and import the data based on the WebUI trigger. I don't like the idea of storing a user generated action as a flag (trigger) in the database where some service will poll it (at an interval which is not under the user's control) to execute that action. You could even convert the whole code into an exe which you can then schedule using the Windows Scheduler. And call the same exe whenever the user triggers the action from the Web UI. A: @Vaibhav Unfortunately, the physical architecture of the solution will not allow any direct communication between the components, other than Web UI to Database, and database to service (which can then call out to the web services). I do, however, agree that re-use of the communication classes would be the ideal here - I just can't do it within the confines of our business* *Isn't it always the way that a technically "better" solution is stymied by external factors?
Timer-based event triggers
I am currently working on a project with specific requirements. A brief overview of these are as follows: Data is retrieved from external webservices Data is stored in SQL 2005 Data is manipulated via a web GUI The windows service that communicates with the web services has no coupling with our internal web UI, except via the database. Communication with the web services needs to be both time-based, and triggered via user intervention on the web UI. The current (pre-pre-production) model for web service communication triggering is via a database table that stores trigger requests generated from the manual intervention. I do not really want to have multiple trigger mechanisms, but would like to be able to populate the database table with triggers based upon the time of the call. As I see it there are two ways to accomplish this. 1) Adapt the trigger table to store two extra parameters. One being "Is this time-based or manually added?" and a nullable field to store the timing details (exact format to be determined). If it is a manually created trigger, mark it as processed when the trigger has been fired, but not if it is a timed trigger. or 2) Create a second windows service that creates the triggers on-the-fly at timed intervals. The second option seems like a fudge to me, but the management of option 1 could easily turn into a programming nightmare (how do you know if the last poll of the table returned the event that needs to fire, and how do you then stop it re-triggering on the next poll) I'd appreciate it if anyone could spare a few minutes to help me decide which route (one of these two, or possibly a third, unlisted one) to take.
[ "Why not use a SQL Job instead of the Windows Service? You can encapsulate all of you db \"trigger\" code in Stored Procedures. Then your UI and SQL Job can call the same Stored Procedures and create the triggers the same way whether it's manually or at a time interval.\n", "The way I see it is this.\nYou have a Windows Service, which is playing the role of a scheduler and in it there are some classes which simply call the webservices and put the data in your databases.\nSo, you can use these classes directly from the WebUI as well and import the data based on the WebUI trigger.\nI don't like the idea of storing a user generated action as a flag (trigger) in the database where some service will poll it (at an interval which is not under the user's control) to execute that action.\nYou could even convert the whole code into an exe which you can then schedule using the Windows Scheduler. And call the same exe whenever the user triggers the action from the Web UI.\n", "@Vaibhav\nUnfortunately, the physical architecture of the solution will not allow any direct communication between the components, other than Web UI to Database, and database to service (which can then call out to the web services). I do, however, agree that re-use of the communication classes would be the ideal here - I just can't do it within the confines of our business*\n*Isn't it always the way that a technically \"better\" solution is stymied by external factors?\n" ]
[ 3, 0, 0 ]
[]
[]
[ "service", "sql", "timer", "triggers", "web_services" ]
stackoverflow_0000003272_service_sql_timer_triggers_web_services.txt
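On the question's worry about a timed trigger re-firing on the next poll: one pattern is to claim pending rows atomically as you read them. A rough sketch against a hypothetical dbo.TriggerRequests(Id, Source, ProcessedAt) table - all names invented - using SQL Server 2005's OUTPUT clause so the claim and the read happen in one statement:

using System;
using System.Data.SqlClient;

class TriggerPoller
{
    static void Poll(string connectionString)
    {
        const string sql =
            "UPDATE dbo.TriggerRequests " +
            "SET ProcessedAt = GETUTCDATE() " +
            "OUTPUT inserted.Id, inserted.Source " +
            "WHERE ProcessedAt IS NULL;";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Each row is already marked processed, so the next
                    // poll cannot return it again.
                    Console.WriteLine("Firing trigger {0} ({1})",
                        reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}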
Q: How to set up a CSS switcher I'm working on a website that will switch to a new style on a set date. The site's built in semantic HTML and CSS, so the change should just require a CSS reference change. I'm working with a designer who will need to be able to see how it's looking, as well as a client who will need to be able to review content updates in the current look as well as design progress on the new look. I'm planning to use a magic querystring value and/or a javascript link in the footer which writes out a cookie to select the new CSS page. We're working in ASP.NET 3.5. Any recommendations? I should mention that we're using IE Conditional Comments for IE8, 7, and 6 support. I may create a function that does a replacement: <link href="Style/<% GetCssRoot() %>.css" rel="stylesheet" type="text/css" /> <!--[if lte IE 8]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie8.css" rel="stylesheet" /> <![endif]--> <!--[if lte IE 7]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie7.css" rel="stylesheet" /> <![endif]--> <!--[if lte IE 6]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie6.css" rel="stylesheet" /> <![endif]--> A: In Asp.net 3.5, you should be able to set up the Link tag in the header as a server tag. Then in the codebehind you can set the href property for the link element, based on a cookie value, querystring, date, etc. In your aspx file: <head> <link id="linkStyles" rel="stylesheet" type="text/css" runat="server" /> </head> And in the Code behind: protected void Page_Load(object sender, EventArgs e) { string stylesheetAddress = // logic to determine stylesheet linkStyles.Href = stylesheetAddress; } A: You should look into ASP.NET themes, that's exactly what they're used for. They also allow you to skin controls, which means give them a set of default attributes. A: I would suggest storing the stylesheet selection in the session so you don't have to rely on the querystring key being present all the time. You can check the session in Page_Load and add the appropriate stylesheet reference. It sounds like this is a temporary/development situation, so go with whatever is easy and works. if (!String.IsNullOrEmpty(Request.QueryString["css"])) Session.Add("CSS",Request.QueryString["css"]);
How to set up a CSS switcher
I'm working on a website that will switch to a new style on a set date. The site's built in semantic HTML and CSS, so the change should just require a CSS reference change. I'm working with a designer who will need to be able to see how it's looking, as well as a client who will need to be able to review content updates in the current look as well as design progress on the new look. I'm planning to use a magic querystring value and/or a javascript link in the footer which writes out a cookie to select the new CSS page. We're working in ASP.NET 3.5. Any recommendations? I should mention that we're using IE Conditional Comments for IE8, 7, and 6 support. I may create a function that does a replacement: <link href="Style/<% GetCssRoot() %>.css" rel="stylesheet" type="text/css" /> <!--[if lte IE 8]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie8.css" rel="stylesheet" /> <![endif]--> <!--[if lte IE 7]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie7.css" rel="stylesheet" /> <![endif]--> <!--[if lte IE 6]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie6.css" rel="stylesheet" /> <![endif]-->
[ "In Asp.net 3.5, you should be able to set up the Link tag in the header as a server tag. Then in the codebehind you can set the href property for the link element, based on a cookie value, querystring, date, etc.\nIn your aspx file:\n<head>\n <link id=\"linkStyles\" rel=\"stylesheet\" type=\"text/css\" runat=\"server\" />\n</head>\n\nAnd in the Code behind:\nprotected void Page_Load(object sender, EventArgs e) {\n string stylesheetAddress = // logic to determine stylesheet\n linkStyles.Href = stylesheetAddress;\n}\n\n", "You should look into ASP.NET themes, that's exactly what they're used for. They also allow you to skin controls, which means give them a set of default attributes.\n", "I would suggest storing the stylesheet selection in the session so you don't have to rely on the querystring key being present all the time. You can check the session in Page_Load and add the appropriate stylesheet reference. It sounds like this is a temporary/development situation, so go with whatever is easy and works.\nif (!String.IsNullOrEmpty(Request.QueryString[\"css\"]))\n Session.Add(\"CSS\",Request.QueryString[\"css\"]);\n\n" ]
[ 22, 6, 2 ]
[ "I would do the following:\nwww.website.com/?stylesheet=new.css\nThen in your ASP.NET code:\nif (Request.Querystring[\"stylesheet\"] != null) {\n Response.Cookies[\"stylesheet\"].Value = Request.QueryString[\"stylesheet\"];\n Response.Redirect(<Current Page>);\n}\n\nThen where you define your stylesheets:\nif (Request.Cookies[\"stylesheet\"] != null) {\n // New Stylesheet\n} else {\n // Default\n}\n\n" ]
[ -2 ]
[ "asp.net", "css", "html", "javascript" ]
stackoverflow_0000005118_asp.net_css_html_javascript.txt
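Pulling the answers above together, the Page_Load might honour the querystring first, fall back to the cookie, and finally use the current default. This is only a sketch - the stylesheet names are placeholders, and in production the value should be whitelisted before being used in a path:

protected void Page_Load(object sender, EventArgs e)
{
    string css = Request.QueryString["css"];
    if (!String.IsNullOrEmpty(css))
    {
        // Remember the choice for subsequent requests.
        Response.Cookies["css"].Value = css;
    }
    else if (Request.Cookies["css"] != null)
    {
        css = Request.Cookies["css"].Value;
    }

    // linkStyles is the <link runat="server"> element from the first answer.
    linkStyles.Href = String.Format("Style/{0}.css",
        String.IsNullOrEmpty(css) ? "current" : css);
}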
Q: Is this really widening vs autoboxing? I saw this in an answer to another question, in reference to shortcomings of the Java spec: There are more shortcomings and this is a subtle topic. Check this out: public class methodOverloading{ public static void hello(Integer x){ System.out.println("Integer"); } public static void hello(long x){ System.out.println("long"); } public static void main(String[] args){ int i = 5; hello(i); } } Here "long" would be printed (haven't checked it myself), because the compiler chooses widening over auto-boxing. Be careful when using auto-boxing or don't use it at all! Are we sure that this is actually an example of widening instead of autoboxing, or is it something else entirely? On my initial scanning, I would agree with the statement that the output would be "long" on the basis of i being declared as a primitive and not an object. However, if you changed hello(long x) to hello(Long x) the output would print "Integer" What's really going on here? I know nothing about the compilers/bytecode interpreters for java... A: In the first case, you have a widening conversion happening. This can be seen when running the "javap" utility program (included w/ the JDK) on the compiled class: public static void main(java.lang.String[]); Code: 0: iconst_5 1: istore_1 2: iload_1 3: i2l 4: invokestatic #6; //Method hello:(J)V 7: return } Clearly, you see the I2L, which is the mnemonic for the widening Integer-To-Long bytecode instruction. See reference here. And in the other case, replacing the "long x" with the object "Long x" signature, you'll have this code in the main method: public static void main(java.lang.String[]); Code: 0: iconst_5 1: istore_1 2: iload_1 3: invokestatic #6; //Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer; 6: invokestatic #7; //Method hello:(Ljava/lang/Integer;)V 9: return } So you see the compiler has created the instruction Integer.valueOf(int), to box the primitive inside the wrapper. A: Yes it is, try it out in a test. You will see "long" printed. It is widening because Java will choose to widen the int into a long before it chooses to autobox it to an Integer, so the hello(long) method is chosen to be called. Edit: the original post being referenced. Further Edit: The reason the second option would print Integer is because there is no "widening" into a larger primitive as an option, so it MUST box it up, thus Integer is the only option. Furthermore, java will only autobox to the original type, so it would give a compiler error if you leave the hello(Long) and remove hello(Integer). A: Another interesting thing with this example is the method overloading. The combination of type widening and method overloading only works because the compiler has to make a decision of which method to choose. Consider the following example: public static void hello(Collection x){ System.out.println("Collection"); } public static void hello(List x){ System.out.println("List"); } public static void main(String[] args){ Collection col = new ArrayList(); hello(col); } It doesn't use the run-time type which is List, it uses the compile-time type which is Collection and thus prints "Collection". I encourage you to read Effective Java, which opened my eyes to some corner cases of the JLS.
Is this really widening vs autoboxing?
I saw this in an answer to another question, in reference to shortcomings of the Java spec: There are more shortcomings and this is a subtle topic. Check this out: public class methodOverloading{ public static void hello(Integer x){ System.out.println("Integer"); } public static void hello(long x){ System.out.println("long"); } public static void main(String[] args){ int i = 5; hello(i); } } Here "long" would be printed (haven't checked it myself), because the compiler chooses widening over auto-boxing. Be careful when using auto-boxing or don't use it at all! Are we sure that this is actually an example of widening instead of autoboxing, or is it something else entirely? On my initial scanning, I would agree with the statement that the output would be "long" on the basis of i being declared as a primitive and not an object. However, if you changed hello(long x) to hello(Long x) the output would print "Integer" What's really going on here? I know nothing about the compilers/bytecode interpreters for java...
[ "In the first case, you have a widening conversion happening. This can be see when runinng the \"javap\" utility program (included w/ the JDK), on the compiled class:\npublic static void main(java.lang.String[]);\n Code:\n 0: iconst_ 5\n 1: istore_ 1\n 2: iload_ 1\n 3: i2l\n 4: invokestatic #6; //Method hello:(J)V\n 7: return\n\n}\n\nClearly, you see the I2L, which is the mnemonic for the widening Integer-To-Long bytecode instruction. See reference here.\nAnd in the other case, replacing the \"long x\" with the object \"Long x\" signature, you'll have this code in the main method:\npublic static void main(java.lang.String[]);\n Code:\n 0: iconst_ 5\n 1: istore_ 1\n 2: iload_ 1\n 3: invokestatic #6; //Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;\n 6: invokestatic #7; //Method hello:(Ljava/lang/Integer;)V\n 9: return\n\n}\n\nSo you see the compiler has created the instruction Integer.valueOf(int), to box the primitive inside the wrapper.\n", "Yes it is, try it out in a test. You will see \"long\" printed. It is widening because Java will choose to widen the int into a long before it chooses to autobox it to an Integer, so the hello(long) method is chosen to be called.\nEdit: the original post being referenced.\nFurther Edit: The reason the second option would print Integer is because there is no \"widening\" into a larger primitive as an option, so it MUST box it up, thus Integer is the only option. Furthermore, java will only autobox to the original type, so it would give a compiler error if you leave the hello(Long) and removed hello(Integer).\n", "Another interesting thing with this example is the method overloading. The combination of type widening and method overloading only working because the compiler has to make a decision of which method to choose. Consider the following example:\npublic static void hello(Collection x){\n System.out.println(\"Collection\");\n}\n\npublic static void hello(List x){\n System.out.println(\"List\");\n}\n\npublic static void main(String[] args){\n Collection col = new ArrayList();\n hello(col);\n}\n\nIt doesn't use the run-time type which is List, it uses the compile-time type which is Collection and thus prints \"Collection\". \nI encourage your to read Effective Java, which opened my eyes to some corner cases of the JLS.\n" ]
[ 16, 5, 3 ]
[]
[]
[ "autoboxing", "java", "primitive" ]
stackoverflow_0000004922_autoboxing_java_primitive.txt
Q: How do you pull the URL for an ASP.NET web reference from a configuration file in Visual Studio 2008? I have a web reference for our report server embedded in our application. The server that the reports live on could change though, and I'd like to be able to change it "on the fly" if necessary. I know I've done this before, but can't seem to remember how. Thanks for your help. I've manually driven around this for the time being. It's not a big deal to set the URL in the code, but I'd like to figure out what the "proper" way of doing this in VS 2008 is. Could anyone provide any further insights? Thanks! In VS2008 when I change the URL Behavior property to Dynamic I get the following code auto-generated in the Reference class. Can I override this setting (MySettings) in the web.config? I guess I don't know how the settings stuff works. Public Sub New() MyBase.New Me.Url = Global.My.MySettings.Default.Namespace_Reference_ServiceName If (Me.IsLocalFileSystemWebService(Me.Url) = true) Then Me.UseDefaultCredentials = true Me.useDefaultCredentialsSetExplicitly = false Else Me.useDefaultCredentialsSetExplicitly = true End If End Sub EDIT So this stuff has changed a bit since VS03 (which was probably the last VS version I used to do this). According to: http://msdn.microsoft.com/en-us/library/a65txexh.aspx it looks like I have a settings object on which I can set the property programatically, but that I would need to provide the logic to retrieve that URL from the web.config. Is this the new standard way of doing this in VS2008, or am I missing something? EDIT #2 Anyone have any ideas here? I drove around it in my application and just put the URL in my web.config myself and read it out. But I'm not happy with that because it still feels like I'm missing something. A: In the properties window change the "behavior" to Dynamic. See: http://www.codeproject.com/KB/XML/wsdldynamicurl.aspx A: If you mean a VS2005 "Web Reference", then the generated proxy classes have a URL property that is the SOAP endpoint url of that service. You can change this property and have your subsequent http communications be made to that new endpoint. Edit: Ah, thanks bcaff86. I didn't know you could do that simply by changing a property.
How do you pull the URL for an ASP.NET web reference from a configuration file in Visual Studio 2008?
I have a web reference for our report server embedded in our application. The server that the reports live on could change though, and I'd like to be able to change it "on the fly" if necessary. I know I've done this before, but can't seem to remember how. Thanks for your help. I've manually driven around this for the time being. It's not a big deal to set the URL in the code, but I'd like to figure out what the "proper" way of doing this in VS 2008 is. Could anyone provide any further insights? Thanks! In VS2008 when I change the URL Behavior property to Dynamic I get the following code auto-generated in the Reference class. Can I override this setting (MySettings) in the web.config? I guess I don't know how the settings stuff works. Public Sub New() MyBase.New Me.Url = Global.My.MySettings.Default.Namespace_Reference_ServiceName If (Me.IsLocalFileSystemWebService(Me.Url) = true) Then Me.UseDefaultCredentials = true Me.useDefaultCredentialsSetExplicitly = false Else Me.useDefaultCredentialsSetExplicitly = true End If End Sub EDIT So this stuff has changed a bit since VS03 (which was probably the last VS version I used to do this). According to: http://msdn.microsoft.com/en-us/library/a65txexh.aspx it looks like I have a settings object on which I can set the property programatically, but that I would need to provide the logic to retrieve that URL from the web.config. Is this the new standard way of doing this in VS2008, or am I missing something? EDIT #2 Anyone have any ideas here? I drove around it in my application and just put the URL in my web.config myself and read it out. But I'm not happy with that because it still feels like I'm missing something.
[ "In the properties window change the \"behavior\" to Dynamic.\nSee: http://www.codeproject.com/KB/XML/wsdldynamicurl.aspx\n", "If you mean a VS2005 \"Web Reference\", then the generated proxy classes have a URL property that is the SOAP endpoint url of that service. You can change this property and have your subsequent http communications be made to that new endpoint.\nEdit: Ah, thanks bcaff86. I didn't know you could do that simply by changing a property.\n" ]
[ 3, 0 ]
[]
[]
[ "asmx" ]
stackoverflow_0000005188_asmx.txt
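If the generated Settings plumbing feels opaque, a plain appSettings override is another route. This sketch assumes a generated proxy class named ReportServerService (the real name will differ) and an appSettings key added to web.config by hand:

// web.config (hypothetical key):
// <appSettings>
//   <add key="ReportServerUrl" value="http://reports.example.com/service.asmx" />
// </appSettings>

using System;
using System.Configuration;

public static class ReportServiceFactory
{
    public static ReportServerService Create()
    {
        ReportServerService service = new ReportServerService();

        string url = ConfigurationManager.AppSettings["ReportServerUrl"];
        if (!String.IsNullOrEmpty(url))
        {
            service.Url = url; // override the design-time URL at runtime
        }
        return service;
    }
}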
Q: How can I dynamically center an image in a MS Reporting Services report? Out of the box, in MS Reporting Services, the image element does not allow for the centering of the image itself, when the dimensions are unknown at design time. In other words, the image (if smaller than the dimensions allotted on the design surface) will be anchored to the top left corner, not in the center. My report will know the URL of the image at runtime, and I need to be able to center this image if it is smaller than the dimensions specified in my designer. A: Here is how I was able to accomplish this. With help from Chris Hays. Size the image to be as big as you would want it on the report, change "Sizing" property to "Clip". Dynamically set the image's left padding using an expression: =CStr(Round((4.625-System.Drawing.Image.FromStream(System.Net.WebRequest.Create(Parameters!LogoURL.Value).GetResponse().GetResponseStream()).Width/96)/2,2)) & "in" Dynamically set the image's top padding using an expression: =CStr(Round((1.125-System.Drawing.Image.FromStream(System.Net.WebRequest.Create(Parameters!LogoURL.Value).GetResponse().GetResponseStream()).Height/96)/2,2)) & "in" The first modification made to Chris's code was to swap out the dimensions of my image element on the report (my image was 4.625x1.125 - see numbers above). I also chose to get the stream from a URL instead of the database. I used WebRequest.Create.GetResponse.GetResponseStream to do so. So far so good - I hope that helps!
How can I dynamically center an image in a MS Reporting Services report?
Out of the box, in MS Reporting Services, the image element does not allow for the centering of the image itself, when the dimensions are unknown at design time. In other words, the image (if smaller than the dimensions allotted on the design surface) will be anchored to the top left corner, not in the center. My report will know the URL of the image at runtime, and I need to be able to center this image if it is smaller than the dimensions specified in my designer.
[ "Here is how I was able to accomplish this. With help from Chris Hays\nSize the image to be as big as you would want it on the report, change \"Sizing\" property to \"Clip\".\nDynamically set the image's left padding using an expression:\n=CStr(Round((4.625-System.Drawing.Image.FromStream(System.Net.WebRequest.Create(Parameters!LogoURL.Value).GetResponse().GetResponseStream()).Width/96)/2,2)) & \"in\"\n\nDynamically set the image's top padding using an expression:\n=CStr(Round((1.125-System.Drawing.Image.FromStream(System.Net.WebRequest.Create(Parameters!LogoURL.Value).GetResponse().GetResponseStream()).Height/96)/2,2)) & \"in\"\n\nThe first modification made to Chris's code was to swap out the dimensions of my image element on the report (my image was 4.625x1.125 - see numbers above).\nI also chose to get the stream from a URL instead of the database. I used WebRequest.Create.GetResponse.GetResponseStream do to so.\nSo far so good - I Hope that helps!\n" ]
[ 8 ]
[]
[]
[ "reporting_services" ]
stackoverflow_0000005264_reporting_services.txt
Q: HTML Comments Markup I am currently in the process of creating my own blog and I have got to marking up the comments, but what is the best way to mark it up? The information I need to present is: Person's Name Gravatar Icon Comment Date The Comment PS: I'm only interested in semantic HTML markup. A: I think that your version with the cite, blockquote, etc. would definitely work, but if semantics is your main concern then I personally wouldn't use cite and blockquote as they have specific things that they are supposed to represent. The blockquote tag is meant to represent a quotation taken from another source and the cite tag is meant to represent a source of information (like a magazine, newspaper, etc.). I think an argument can certainly be made that you can use semantic HTML with class names, provided they are meaningful. This article on Plain Old Semantic HTML makes a reference to using class names - http://www.fooclass.com/plain_old_semantic_html A: Here's one way you could do it with the following CSS to float the picture to the left of the contents: .comment { width: 400px; } .comment_img { float: left; } .comment_text, .comment_meta { margin-left: 40px; } .comment_meta { clear: both; } <div class='comment' id='comment_(comment id #)'> <div class='comment_img'> <img src='https://placehold.it/100' alt='(Commenter Name)' /> </div> <div class='comment_text'> <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Sed mauris. Morbi quis tellus sit amet eros ullamcorper ultrices. Proin a tortor. Praesent et odio. Duis mi odio, consequat ut, euismod sed, commodo vitae, nulla. Suspendisse potenti. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Etiam pede.</p> <p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Maecenas rhoncus accumsan velit. Donec varius magna a est. </p> </div> <p class='comment_meta'> By <a href='#'>Name</a> on <span class='comment_date'>2008-08-21 11:32 AM</span> </p> </div> A: I was perhaps thinking of something like this: <ol class="comments"> <li> <a href=""> <img src="" alt="" /> </a> <cite>Name<br />Date</cite> <blockquote>Comment</blockquote> </li> </ol> It's very semantic without using div's and only one class. The list shows the order the comments were made, a link to the person's website, an image for their gravatar, the cite tag to cite who said the comment and blockquote to hold what they said. A: I don't know that there's markup that would necessarily represent the comment structure well without using divs or classes as well, but you could use definition lists. You can use multiple dt and dd tags in the context of a definition list - see 10.3 Definition lists: the DL, DT, and DD elements. <dl> <dt>By [Name] at 2008-01-01</dt> <dd><img src='...' alt=''/></dd> <dd><p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Sed mauris. Morbi quis tellus sit amet eros ullamcorper ultrices. Proin a tortor. Praesent et odio. Duis mi odio, consequat ut, euismod sed, commodo vitae, nulla. Suspendisse potenti. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Etiam pede.</p> <p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Maecenas rhoncus accumsan velit. Donec varius magna a est. </p> </dd> </dl> The concern I'd have with an approach like this is that it could be difficult to uniquely identify the elements with CSS for styling purposes. You could use JavaScript (jQuery would be great here) to find and apply styles. Without full CSS selector support across browsers (Internet Explorer), it would be tougher to style. A: I see your point. OK, after reading through that article, why don't you try something like this? <blockquote cite="http://yoursite/comments/feederscript.php?id=commentid" title="<?php echo Name . " - " . Date ?>" > <?php echo Comment ?> </blockquote> with some snazzy CSS to make it look nice. feederscript.php would be something that could read from the database and echo only the commentid called for.
HTML Comments Markup
I am currently in the process of creating my own blog and I have got to marking up the comments, but what is the best way to mark it up? The information I need to present is: Persons Name Gravatar Icon Comment Date The Comment PS: I'm only interested in semantic HTML markup.
[ "I think that your version with the cite, blockquote, etc. would definitely work, but if semantics is your main concern then I personally wouldn't use cite and blockquote as they have specific things that they are supposed to represent.\nThe blockquote tag is meant to represent a quotation taken from another source and the cite tag is meant to represent a source of information (like a magazine, newspaper, etc.).\nI think an argument can certainly made that you can use semantic HTML with class names, provided they are meaningful. This article on Plain Old Semantic HTML makes a reference to using class names - http://www.fooclass.com/plain_old_semantic_html\n", "Here's one way you could do it with the following CSS to float the picture to the left of the contents:\n\n\n.comment {\r\n width: 400px;\r\n}\r\n\r\n.comment_img {\r\n float: left;\r\n}\r\n\r\n.comment_text,\r\n.comment_meta {\r\n margin-left: 40px;\r\n}\r\n\r\n.comment_meta {\r\n clear: both;\r\n}\n<div class='comment' id='comment_(comment id #)'>\r\n <div class='comment_img'>\r\n <img src='https://placehold.it/100' alt='(Commenter Name)' />\r\n </div>\r\n <div class='comment_text'>\r\n <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Sed mauris. Morbi quis tellus sit amet eros ullamcorper ultrices. Proin a tortor. Praesent et odio. Duis mi odio, consequat ut, euismod sed, commodo vitae, nulla. Suspendisse potenti. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Etiam pede.</p>\r\n <p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Maecenas rhoncus accumsan velit. Donec varius magna a est. </p>\r\n </div>\r\n <p class='comment_meta'>\r\n By <a href='#'>Name</a> on <span class='comment_date'>2008-08-21 11:32 AM</span>\r\n </p>\r\n</div>\n\n\n\n", "I was perhaps thinking of something like this:\n<ol class=\"comments\">\n <li>\n <a href=\"\">\n <img src=\"\" alt=\"\" />\n </a>\n <cite>Name<br />Date</cite>\n <blockquote>Comment</blockquote>\n </li>\n</ol>\n\nIt's very semantic without using div's and only one class. The list shows the order the comments were made, a link to the persons website, and image for their gravatar, the cite tag to site who said the comment and blockquote to hold what they said.\n", "I don't know that there's markup that would necessarily represent the comment structure well without using divs or classes as well, but you could use definition lists. You can use multiple dt and dd tags in the context of a definition list - see 10.3 Definition lists: the DL, DT, and DD elements.\n<dl>\n <dt>By [Name] at 2008-01-01<dt>\n <dd><img src='...' alt=''/></dd>\n <dd><p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Sed mauris. Morbi quis tellus sit amet eros ullamcorper ultrices. Proin a tortor. Praesent et odio. Duis mi odio, consequat ut, euismod sed, commodo vitae, nulla. Suspendisse potenti. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Etiam pede.</p>\n\n <p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Maecenas rhoncus accumsan velit. Donec varius magna a est. </p>\n </dd>\n</dl>\n\nThe concern I'd have with an approach like this is that it could be difficult to uniquely identify the elements with CSS for styling purposes. You could use JavaScript (jQuery would be great here) to find and apply styles. Without full CSS selector support across browsers (Internet Explorer), it would be tougher to style.\n", "I see your point. 
OK, after reading through that article, why don't you try something like this?\n<blockquote \n cite=\"http://yoursite/comments/feederscript.php?id=commentid\" \n title=\"<?php echo Name . \" - \" . Date ?>\" >\n <?php echo Comment ?>\n</blockquote>\n\nwith some snazzy CSS to make it look nice.\nfeederscript.php would be something that could read from the database and echo only the commentid called for.\n" ]
[ 7, 2, 2, 1, 1 ]
[]
[]
[ "html", "semantic_markup" ]
stackoverflow_0000005226_html_semantic_markup.txt
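A possible styling sketch for the ordered-list comment markup proposed above. This is an illustration, not part of the original answers; the gravatar size (50px) and spacing values are assumptions you would tune to your design: ol.comments { list-style: none; margin: 0; padding: 0; } ol.comments li { margin-bottom: 1.5em; overflow: hidden; } ol.comments li img { float: left; width: 50px; margin-right: 10px; } /* gravatar */ ol.comments li cite { display: block; font-style: normal; font-weight: bold; } ol.comments li blockquote { margin: 0 0 0 60px; } /* sits beside the floated gravatar */ Only the single comments class is needed, which answers the styling concern raised against the class-free definition-list version.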
Q: SQL Server 2008 FileStream on a Web Server I've been developing a site using ASP.NET MVC, and have decided to use the new SQL Server 2008 FILESTREAM facility to store files 'within' the database rather than as separate entities. While initially working within VS2008 (using a trusted connection to the database), everything was fine and dandy. Issues arose, however, when I shifted the site to IIS7 and changed over to SQL authentication on the database. It seems that streaming a FILESTREAM doesn't work with SQL authentication, only with Windows authentication. Given this, what is the best practice to follow? Is there a way to force this sort of thing to work under SQL authentication? Should I add NETWORK SERVICE as a database user and then use Trusted authentication? Should I create another user, and run both the IIS site and the database connection under this? Any other suggestions? A: Take a look at this article. I don't know a whole lot about FileStreaming and security, but there are a couple of interesting options in the FileStreaming setup such as allowing remote connections and allow remote clients to access FileStreaming
SQL Server 2008 FileStream on a Web Server
I've been developing a site using ASP.NET MVC, and have decided to use the new SQL Server 2008 FILESTREAM facility to store files 'within' the database rather than as separate entities. While initially working within VS2008 (using a trusted connection to the database), everything was fine and dandy. Issues arose, however, when I shifted the site to IIS7 and changed over to SQL authentication on the database. It seems that streaming a FILESTREAM doesn't work with SQL authentication, only with Windows authentication. Given this, what is the best practice to follow? Is there a way to force this sort of thing to work under SQL authentication? Should I add NETWORK SERVICE as a database user and then use Trusted authentication? Should I create another user, and run both the IIS site and the database connection under this? Any other suggestions?
[ "Take a look at this article. I don't know a whole lot about FileStreaming and security, but there are a couple of interesting options in the FileStreaming setup such as allowing remote connections and allow remote clients to access FileStreaming\n" ]
[ 2 ]
[]
[]
[ "iis", "sql_server", "sql_server_2008" ]
stackoverflow_0000005396_iis_sql_server_sql_server_2008.txt
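To expand on the Windows-authentication angle in the question: the Win32 streaming API for FILESTREAM (SqlFileStream/OpenSqlFilestream) only works over integrated security, so the usual pattern is to run the IIS7 application pool under a dedicated Windows or domain account, add that account as a database user, and use a trusted connection. A hedged sketch of the connection string (server, database, and name are placeholders): <connectionStrings> <add name="FileStreamDb" connectionString="Data Source=DBSERVER;Initial Catalog=MyDatabase;Integrated Security=SSPI" providerName="System.Data.SqlClient" /> </connectionStrings> SQL authentication can still read the column as ordinary varbinary(max) through T-SQL, just not through the streaming API.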
Q: What is the best way to wrap time around the work day? I have a situation where I want to add hours to a date and have the new date wrap around the work-day. I cobbled up a function to determine this new date, but want to make sure that I'm not forgetting anything. The number of hours to be added is called "delay". It could easily be a parameter to the function instead. Please post any suggestions. [VB.NET Warning] Private Function GetDateRequired() As Date ''// A decimal representation of the current hour Dim hours As Decimal = Decimal.Parse(Date.Now.Hour) + (Decimal.Parse(Date.Now.Minute) / 60.0) Dim delay As Decimal = 3.0 ''// delay in hours Dim endOfDay As Decimal = 12.0 + 5.0 ''// end of day, in hours Dim startOfDay As Decimal = 8.0 ''// start of day, in hours Dim newHour As Integer Dim newMinute As Integer Dim dateRequired As Date = Now Dim delta As Decimal = hours + delay ''// Wrap around to the next day, if necessary If delta > endOfDay Then delta = delta - endOfDay dateRequired = dateRequired.AddDays(1) newHour = Integer.Parse(Decimal.Truncate(delta)) newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60)) newHour = startOfDay + newHour Else newHour = Integer.Parse(Decimal.Truncate(delta)) newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60)) End If dateRequired = New Date(dateRequired.Year, dateRequired.Month, dateRequired.Day, newHour, newMinute, 0) Return dateRequired End Function Note: This will probably not work if delay is more than 9 hours long. It should never change from 3, though. EDIT: The goal is to find the date and time that you get as a result of adding several hours to the current time. This is used to determine a default value for a due date of a submission. I want to add 3 hours to the current time to get the due date time. However, I don't want due dates that go beyond 5pm on the current day. So, I tried to have the hours split between (today, up to 5pm) and (tomorrow, from 8am on), such that adding 3 hours to 4pm would give you 10am, because 1 hour is added to the end of today and 2 hours are added to the beginning of tomorrow. A: Okay, how about these? The difference between the approaches should speak for themselves. Also, this is tested about as far as I can throw it. The warranty lasts until... now. Hope it helps! 
Module Module1 Public Function IsInBusinessHours(ByVal d As Date) As Boolean Return Not (d.Hour < 8 OrElse d.Hour > 17 OrElse d.DayOfWeek = DayOfWeek.Saturday OrElse d.DayOfWeek = DayOfWeek.Sunday) End Function Public Function AddInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date Dim work As Date = fromDate.AddHours(hours) While Not IsInBusinessHours(work) work = work.AddHours(1) End While Return work End Function Public Function LoopInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date Dim work As Date = fromDate While hours > 0 While hours > 0 AndAlso IsInBusinessHours(work) work = work.AddHours(1) hours -= 1 End While While Not IsInBusinessHours(work) work = work.AddHours(1) End While End While Return work End Function Sub Main() Dim test As Date = New Date(2008, 8, 8, 15, 0, 0) Dim hours As Integer = 5 Console.WriteLine("Date: " + test.ToString() + ", " + hours.ToString()) Console.WriteLine("Just skipping: " + AddInBusinessHours(test, hours)) Console.WriteLine("Looping: " + LoopInBusinessHours(test, hours)) Console.ReadLine() End Sub End Module A: You should probably write some automated tests for each condition you can think of, and then just start brainstorming more, writing the tests as you think of them. This way, you can see for sure it will work, and will continue to work if you make further changes. Look up Test Driven Development if you like the results. A: I've worked with the following formula (pseudocode) with some success: now <- number of minutes since the work day started delay <- number of minutes in the delay day <- length of a work day in minutes x <- (now + delay) / day {integer division} y <- (now + delay) % day {modulo remainder} return startoftoday + x {in days} + y {in minutes}
What is the best way to wrap time around the work day?
I have a situation where I want to add hours to a date and have the new date wrap around the work-day. I cobbled up a function to determine this new date, but want to make sure that I'm not forgetting anything. The number of hours to be added is called "delay". It could easily be a parameter to the function instead. Please post any suggestions. [VB.NET Warning] Private Function GetDateRequired() As Date ''// A decimal representation of the current hour Dim hours As Decimal = Decimal.Parse(Date.Now.Hour) + (Decimal.Parse(Date.Now.Minute) / 60.0) Dim delay As Decimal = 3.0 ''// delay in hours Dim endOfDay As Decimal = 12.0 + 5.0 ''// end of day, in hours Dim startOfDay As Decimal = 8.0 ''// start of day, in hours Dim newHour As Integer Dim newMinute As Integer Dim dateRequired As Date = Now Dim delta As Decimal = hours + delay ''// Wrap around to the next day, if necessary If delta > endOfDay Then delta = delta - endOfDay dateRequired = dateRequired.AddDays(1) newHour = Integer.Parse(Decimal.Truncate(delta)) newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60)) newHour = startOfDay + newHour Else newHour = Integer.Parse(Decimal.Truncate(delta)) newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60)) End If dateRequired = New Date(dateRequired.Year, dateRequired.Month, dateRequired.Day, newHour, newMinute, 0) Return dateRequired End Function Note: This will probably not work if delay is more than 9 hours long. It should never change from 3, though. EDIT: The goal is to find the date and time that you get as a result of adding several hours to the current time. This is used to determine a default value for a due date of a submission. I want to add 3 hours to the current time to get the due date time. However, I don't want due dates that go beyond 5pm on the current day. So, I tried to have the hours split between (today, up to 5pm) and (tomorrow, from 8am on), such that adding 3 hours to 4pm would give you 10am, because 1 hour is added to the end of today and 2 hours are added to the beginning of tomorrow.
[ "Okay, how about these? The difference between the approaches should speak for themselves.\nAlso, this is tested about as far as I can throw it. The warranty lasts until... now.\nHope it helps!\nModule Module1\n\n Public Function IsInBusinessHours(ByVal d As Date) As Boolean\n Return Not (d.Hour < 8 OrElse d.Hour > 17 OrElse d.DayOfWeek = DayOfWeek.Saturday OrElse d.DayOfWeek = DayOfWeek.Sunday)\n End Function\n\n\n Public Function AddInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date\n Dim work As Date = fromDate.AddHours(hours)\n While Not IsInBusinessHours(work)\n work = work.AddHours(1)\n End While\n Return work\n End Function\n\n\n Public Function LoopInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date\n Dim work As Date = fromDate\n While hours > 0\n While hours > 0 AndAlso IsInBusinessHours(work)\n work = work.AddHours(1)\n hours -= 1\n End While\n While Not IsInBusinessHours(work)\n work = work.AddHours(1)\n End While\n End While\n Return work\n End Function\n\n Sub Main()\n Dim test As Date = New Date(2008, 8, 8, 15, 0, 0)\n Dim hours As Integer = 5\n Console.WriteLine(\"Date: \" + test.ToString() + \", \" + hours.ToString())\n Console.WriteLine(\"Just skipping: \" + AddInBusinessHours(test, hours))\n Console.WriteLine(\"Looping: \" + LoopInBusinessHours(test, hours))\n Console.ReadLine()\n End Sub\n\nEnd Module\n\n", "You should probably write some automated tests for each condition you can think of, and then just start brainstorming more, writing the tests as you think of them. This way, you can see for sure it will work, and will continue to work if you make further changes. Look up Test Driven Development if you like the results.\n", "I've worked with the following formula (pseudocode) with some success:\nnow <- number of minutes since the work day started\ndelay <- number of minutes in the delay\nday <- length of a work day in minutes\n\nx <- (now + delay) / day {integer division}\ny <- (now + delay) % day {modulo remainder}\n\nreturn startoftoday + x {in days} + y {in minutes}\n\n" ]
[ 3, 2, 1 ]
[]
[]
[ "date", "vb.net" ]
stackoverflow_0000005260_date_vb.net.txt
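The pseudocode in the last answer translates fairly directly into VB.NET. A sketch under the question's assumptions (8am-5pm day, no weekend or holiday handling - the looping answer above covers weekends): Private Function AddWorkTime(ByVal fromDate As Date, ByVal delayMinutes As Integer) As Date Const DayLengthMinutes As Integer = 9 * 60 ' 8am to 5pm Dim startOfDay As Date = fromDate.Date.AddHours(8) Dim nowMinutes As Integer = CInt((fromDate - startOfDay).TotalMinutes) Dim total As Integer = nowMinutes + delayMinutes Dim wholeDays As Integer = total \ DayLengthMinutes ' integer division Dim leftover As Integer = total Mod DayLengthMinutes ' modulo remainder Return startOfDay.AddDays(wholeDays).AddMinutes(leftover) End Function For 4pm plus a 3-hour delay this yields 10am the next day, matching the question's intent, and unlike the original GetDateRequired it also behaves sensibly when the delay spans more than one working day.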
Q: How do I create a Class using the Singleton Design Pattern in Ruby? The singleton pattern is a design pattern that is used to restrict instantiation of a class to one object instance. Although I know how to code the singleton pattern in C++ and Java, I was wondering if anyone knows how to implement it in Ruby? A: Actually, the above answer was not completely correct. require 'singleton' class Example include Singleton end You also need to include the require 'singleton' statement. A: Use the singleton module: class Clazz include Singleton end See http://www.ruby-doc.org/stdlib/libdoc/singleton/rdoc/index.html for more info. A: You could use modules to the same effect I believe, although it's not "the singleton pattern", you can have global state that way (which is what a singleton is! Naughty global state!).
How do I create a Class using the Singleton Design Pattern in Ruby?
The singleton pattern is a design pattern that is used to restrict instantiation of a class to one object instance. Although I know how to code the singleton pattern in C++ and Java, I was wondering if anyone knows how to implement it in Ruby?
[ "Actually, the above answer was not completely correct.\nrequire 'singleton'\n\nclass Example\n include Singleton\nend\n\nYou also need to include the require 'singleton' statement.\n", "Use the singleton module:\nclass Clazz\n include Singleton\nend\n\nSee http://www.ruby-doc.org/stdlib/libdoc/singleton/rdoc/index.html for more info.\n", "You could use modules to the same effect I believe, although its not \"the singleton pattern\" you can have global state that way (which is what a singleton is ! Naughty global state !).\n" ]
[ 10, 8, 0 ]
[]
[]
[ "design_patterns", "ruby", "singleton" ]
stackoverflow_0000004677_design_patterns_ruby_singleton.txt
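A quick usage sketch for the Singleton module approach from the answers above (AppConfig is an invented example class): require 'singleton' class AppConfig include Singleton attr_accessor :setting end a = AppConfig.instance b = AppConfig.instance puts a.equal?(b) # => true, always the same object AppConfig.new # raises NoMethodError: the module makes new private Including Singleton both privatizes new and adds the lazily created instance accessor, which is why no hand-rolled C++/Java-style boilerplate is needed.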
Q: Debugging: IE6 + SSL + AJAX + post form = 404 error The Setting: The program in question tries to post form data via an AJAX call to a target procedure contained in the same package as the caller. This is done for a site that uses a secure connection (HTTPS). The technology used here is PLSQL and the DOJO JavaScript library. The development tool is basically a text editor. Code Snippet: > function testPost() { >> dojo.xhrPost( { url: ''dr_tm_w_0120.test_post'', form: ''orgForm'', load: testPostXHRCallback, error: testPostXHRError }); } > function testPostXHRCallback(data,ioArgs) { >> alert(''post callback''); try{ dojo.byId("messageDiv").innerHTML = data; } catch(ex){ if(ex.name == "TypeError") { alert("A type error occurred."); } } return data; } > function testPostXHRError(data, ioArgs) { >> alert(data); alert(''Error when retrieving data from the server!''); return data; } The Problem: When using IE6 (which the entire user-base uses), the response sent back from the server is a 404 error. Observations: The program works fine in Firefox. The calling procedure cannot target any procedures within the same package. The calling procedure can target outside sites (both http, https). The other AJAX calls in the package that are not posts of form data work fine. I've searched the internets and consulted with senior-skilled team members and haven't discovered anything that satisfactorily addresses the issue. *Tried Q&A over at Dojo support forums. The Questions: What troubleshooting techniques do you recommend? What troubleshooting tools do you recommend for HTTPS analyzing? Any hypotheses on what the issue might be? Any ideas for workarounds that aren't total (bad) hacks? Ed. The Solution lomaxx, thx for the fiddler tip. you have no idea how awesome it was to get that and use it as a debugging tool. after starting it up this is what i found and how i fixed it (at least in the short term): > ef Fri, 8 Aug 2008 14:01:26 GMT dr_tm_w_0120.test_post: SIGNATURE (parameter names) MISMATCH VARIABLES IN FORM NOT IN PROCEDURE: SO1_DISPLAYED_,PO1_DISPLAYED_,RWA2_DISPLAYED_,DD1_DISPLAYED_ NON-DEFAULT VARIABLES IN PROCEDURE NOT IN FORM: 0 After seeing that message from the server, I kicked around Fiddler a bit more to see what else I could learn from it. Found that there's a WebForms tab that shows the values in the web form. Wouldn't you know it, the "xxx_DISPLAYED_" fields above were in it. I don't really understand yet why these fields exist, because I didn't create them explicitly in the web PLSQL code. But I do understand now that the target procedure has to include them as parameters to work correctly. Again, this is only in the case of IE6 for me, as Firefox worked fine. Well, that the short term answer and hack to fix it. Hopefully, a little more work in this area will lead to a better understanding of the fundamentals going on here. A: First port of call would be to fire up Fiddler and analyze the data going to and from the browser. Take a look at the headers, the url actually being called and the params (if any) being passed to the AJAX method and see if it all looks good before getting to the server. If that all looks ok, is there any way you can verify it's actually hitting the server via logging, or tracing in the AJAX method? ed: another thing I would try is rig up a test page to call the AJAX method on the server using a non-ajax based call and analyze the traffic in fiddler and compare the two.
Debugging: IE6 + SSL + AJAX + post form = 404 error
The Setting: The program in question tries to post form data via an AJAX call to a target procedure contained in the same package as the caller. This is done for a site that uses a secure connection (HTTPS). The technology used here is PLSQL and the DOJO JavaScript library. The development tool is basically a text editor. Code Snippet: > function testPost() { >> dojo.xhrPost( { url: ''dr_tm_w_0120.test_post'', form: ''orgForm'', load: testPostXHRCallback, error: testPostXHRError }); } > function testPostXHRCallback(data,ioArgs) { >> alert(''post callback''); try{ dojo.byId("messageDiv").innerHTML = data; } catch(ex){ if(ex.name == "TypeError") { alert("A type error occurred."); } } return data; } > function testPostXHRError(data, ioArgs) { >> alert(data); alert(''Error when retrieving data from the server!''); return data; } The Problem: When using IE6 (which the entire user-base uses), the response sent back from the server is a 404 error. Observations: The program works fine in Firefox. The calling procedure cannot target any procedures within the same package. The calling procedure can target outside sites (both http, https). The other AJAX calls in the package that are not posts of form data work fine. I've searched the internets and consulted with senior-skilled team members and haven't discovered anything that satisfactorily addresses the issue. *Tried Q&A over at Dojo support forums. The Questions: What troubleshooting techniques do you recommend? What troubleshooting tools do you recommend for HTTPS analyzing? Any hypotheses on what the issue might be? Any ideas for workarounds that aren't total (bad) hacks? Ed. The Solution lomaxx, thx for the fiddler tip. you have no idea how awesome it was to get that and use it as a debugging tool. after starting it up this is what i found and how i fixed it (at least in the short term): > ef Fri, 8 Aug 2008 14:01:26 GMT dr_tm_w_0120.test_post: SIGNATURE (parameter names) MISMATCH VARIABLES IN FORM NOT IN PROCEDURE: SO1_DISPLAYED_,PO1_DISPLAYED_,RWA2_DISPLAYED_,DD1_DISPLAYED_ NON-DEFAULT VARIABLES IN PROCEDURE NOT IN FORM: 0 After seeing that message from the server, I kicked around Fiddler a bit more to see what else I could learn from it. Found that there's a WebForms tab that shows the values in the web form. Wouldn't you know it, the "xxx_DISPLAYED_" fields above were in it. I don't really understand yet why these fields exist, because I didn't create them explicitly in the web PLSQL code. But I do understand now that the target procedure has to include them as parameters to work correctly. Again, this is only in the case of IE6 for me, as Firefox worked fine. Well, that the short term answer and hack to fix it. Hopefully, a little more work in this area will lead to a better understanding of the fundamentals going on here.
[ "First port of call would be to fire up Fiddler and analyze the data going to and from the browser.\nTake a look at the headers, the url actually being called and the params (if any) being passed to the AJAX method and see if it all looks good before getting to the server.\nIf that all looks ok, is there any way you can verify it's actually hitting the server via logging, or tracing in the AJAX method?\ned: another thing I would try is rig up a test page to call the AJAX method on the server using a non-ajax based call and analyze the traffic in fiddler and compare the two.\n" ]
[ 4 ]
[]
[]
[ "ajax", "debugging", "internet_explorer", "internet_explorer_6", "ssl" ]
stackoverflow_0000005619_ajax_debugging_internet_explorer_internet_explorer_6_ssl.txt
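For anyone hitting the same signature mismatch described in the question's edit: with mod_plsql, every form field posted must map to a parameter on the target procedure. A hedged sketch of the fix - the parameter names are taken from the error message above, while the types and defaults are assumptions: PROCEDURE test_post( so1_displayed_ IN VARCHAR2 DEFAULT NULL, po1_displayed_ IN VARCHAR2 DEFAULT NULL, rwa2_displayed_ IN VARCHAR2 DEFAULT NULL, dd1_displayed_ IN VARCHAR2 DEFAULT NULL -- plus the parameters the form was already sending ); Defaulting them to NULL keeps the procedure callable whether or not the browser sends those extra fields.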
Q: HTML comments break down I have a page that is generated which inserts an HTML comment near the top of the page. Inside the comment is a *nix-style command. <!-- command --option value --option2 value2 --option3 --> This comment breaks the page completely. What is wrong with the comment to cause this to happen, and why is this the case? A: Comments in the XML Spec from the w3.org : For compatibility, the string "--" (double-hyphen) MUST NOT occur within comments. A: If you really want to keep the comment in your page you could use this instead of an HTML comment: <div style="display:none">command --option value --option2 value2 --option3 </div> Or even <div class="comment">command --option value --option2 value2 --option3 </div> and specify: .comment {display:none;} in your stylesheet. A: Comments at the top of the page before <html> will throw IE into quirks mode, which could explain why the page breaks, if that's where your comment appears. For more information, check out the "Triggering different rendering modes" on this wikipedia page
HTML comments break down
I have a page that is generated which inserts an HTML comment near the top of the page. Inside the comment is a *nix-style command. <!-- command --option value --option2 value2 --option3 --> This comment breaks the page completely. What is wrong with the comment to cause this to happen, and why is this the case?
[ "Comments in the XML Spec from the w3.org :\n\nFor compatibility, the string \"--\"\n (double-hyphen) MUST NOT occur within\n comments.\n\n", "If you really want to keep the comment in your page you could use this instead of an HTML comment:\n<div style=\"display:none\">command --option value --option2 value2 --option3 </div>\n\nOr even \n<div class=\"comment\">command --option value --option2 value2 --option3 </div>\n\nand specify:\n.comment {display:none;}\n\nin your stylesheet.\n", "Comments at the top of the page before <html> will throw IE into quirks mode, which could explain why the page breaks, if that's where your comment appears.\nFor more information, check out the \"Triggering different rendering modes\" on this wikipedia page\n" ]
[ 27, 2, 1 ]
[]
[]
[ "comments", "html", "sgml", "xml" ]
stackoverflow_0000005425_comments_html_sgml_xml.txt
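A minimal illustration of a safe rewrite, given the spec rule quoted in the first answer - break up each double hyphen before emitting the comment (the single-space separator is just one choice): <!-- command - -option value - -option2 value2 - -option3 --> Alternatively, move the text out of comment syntax entirely, as the hidden-div answer suggests; that also avoids the IE quirks-mode trigger if the text must appear before the html element.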
Q: Tracking state using ASP.NET AJAX / ICallbackEventHandler I have a problem with maintaining state in an ASP.NET AJAX page. Short version: I need some way to update the page ViewState after an async callback has been made, to reflect any state changes the server made during the async call. This seems to be a common problem, but I will describe my scenario to help explain: I have a grid-like control which has some JavaScript enhancements - namely, the ability to drag and drop columns and rows. When a column or row is dropped into a new position, an AJAX method is invoked to notify the control server-side and fire a corresponding server-side event ("OnColumnMoved" or "OnRowMoved"). ASP.NET AJAX calls, by default, send the entire page as the request. That way the page goes through a complete lifecycle, viewstate is persisted and the state of the control is restored before the RaiseCallbackEvent method is invoked. However, since the AJAX call does not update the page, the ViewState reflects the original state of the control, even after the column or row has been moved. So the second time a client-side action occurs, the AJAX request goes to the server and the page & control are built back up again to reflect the first state of the control, not the state after the first column or row was moved. This problem extends to many implications. For example if we have a client-side/AJAX action to add a new item to the grid, and then a row is dragged, the grid is built server-side with one less item than on the client-side. And finally & most seriously for my specific example, the actual data source object we are acting upon is stored in the page ViewState. That was a design decision to allow keeping a stateful copy of the manipulated data which can either be committed to DB after many manipulations or discarded if the user backs out. That is very difficult to change. So, again, I need a way for the page ViewState to be updated on callback after the AJAX method is fired. A: If you're already shuffling the ViewState around anyway, you might as well use an UpdatePanel. Its partial postbacks will update the page's ViewState automatically. A: Check out this blog post: Tweaking the ICallbackEventHandler and Viewstate. The author seems to be addressing the very situation that you are experiencing: So when using ICallbackEventHandler you have two obstacles to overcome to have updated state management for callbacks. First is the problem of the read-only viewstate. The other is actually registering the changes the user has made to the page before triggering the callback. See the blog post for his suggestions on how to solve this. Also check out this forum post which discusses the same problem as well. A: I actually found both of those links you provided, but as noted they are simply describing the problem, not solving it. The author of the blog post suggests a workaround by using a different ViewState provider, but unfortunately that isn't a possibility in this case...I really need to leave the particulars of the ViewState alone and just hook on to what is being done out-of-the-box. A: I found a fairly elegant solution with Telerik's RadAjaxManager. It works quite nicely. Essentially you register each control which might invoke a postback, and then register each control which should be re-drawn after that postback is performed asynchronously. The RadAjaxManager will update the DOM after the async postback and rewrite the ViewState and all affected controls. 
After taking a peek in Reflector, it looks a little kludgy under the hood, but it suits my purposes. A: I don't understand why you would use a custom control for that, when the built-in ASP.NET AJAX UpdatePanel does the same thing. It just adds more complexity, gives you less support, and makes it more difficult for others to work on your app.
Tracking state using ASP.NET AJAX / ICallbackEventHandler
I have a problem with maintaining state in an ASP.NET AJAX page. Short version: I need some way to update the page ViewState after an async callback has been made, to reflect any state changes the server made during the async call. This seems to be a common problem, but I will describe my scenario to help explain: I have a grid-like control which has some JavaScript enhancements - namely, the ability to drag and drop columns and rows. When a column or row is dropped into a new position, an AJAX method is invoked to notify the control server-side and fire a corresponding server-side event ("OnColumnMoved" or "OnRowMoved"). ASP.NET AJAX calls, by default, send the entire page as the request. That way the page goes through a complete lifecycle, viewstate is persisted and the state of the control is restored before the RaiseCallbackEvent method is invoked. However, since the AJAX call does not update the page, the ViewState reflects the original state of the control, even after the column or row has been moved. So the second time a client-side action occurs, the AJAX request goes to the server and the page & control are built back up again to reflect the first state of the control, not the state after the first column or row was moved. This problem extends to many implications. For example if we have a client-side/AJAX action to add a new item to the grid, and then a row is dragged, the grid is built server-side with one less item than on the client-side. And finally & most seriously for my specific example, the actual data source object we are acting upon is stored in the page ViewState. That was a design decision to allow keeping a stateful copy of the manipulated data which can either be committed to DB after many manipulations or discarded if the user backs out. That is very difficult to change. So, again, I need a way for the page ViewState to be updated on callback after the AJAX method is fired.
[ "If you're already shuffling the ViewState around anyway, you might as well use an UpdatePanel. Its partial postbacks will update the page's ViewState automatically.\n", "Check out this blog post: Tweaking the ICallbackEventHandler and Viewstate. The author seems to be addressing the very situation that you are experiencing: \n\nSo when using ICallbackEventHandler you have two obstacles to overcome to have updated state management for callbacks. First is the problem of the read-only viewstate. The other is actually registering the changes the user has made to the page before triggering the callback.\n\nSee the blog post for his suggestions on how to solve this. Also check out this forum post which discusses the same problem as well.\n", "I actually found both of those links you provided, but as noted they are simply describing the problem, not solving it. The author of the blog post suggests a workaround by using a different ViewState provider, but unfortunately that isn't a possibility in this case...I really need to leave the particulars of the ViewState alone and just hook on to what is being done out-of-the-box.\n", "I found a fairly elegant solution with Telerik's RadAjaxManager. It works quite nicely. Essentially you register each control which might invoke a postback, and then register each control which should be re-drawn after that postback is performed asynchronously. The RadAjaxManager will update the DOM after the async postback and rewrite the ViewState and all affected controls. After taking a peek in Reflector, it looks a little kludgy under the hood, but it suits my purposes.\n", "I don't understand why you would use a custom control for that, when the built-in ASP.NET AJAX UpdatePanel does the same thing.\nIt just adds more complexity, gives you less support, and makes it more difficult for others to work on your app.\n" ]
[ 2, 1, 0, 0, 0 ]
[]
[]
[ "ajax", "asp.net", "asp.net_ajax", "viewstate" ]
stackoverflow_0000002328_ajax_asp.net_asp.net_ajax_viewstate.txt
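For reference, the UpdatePanel route suggested in the first and last answers looks roughly like this - a sketch only, with placeholder IDs and a hypothetical grid control standing in for the custom one: <asp:ScriptManager ID="scriptManager" runat="server" /> <asp:UpdatePanel ID="gridPanel" runat="server" UpdateMode="Conditional"> <ContentTemplate> <my:DraggableGrid ID="grid" runat="server" /> </ContentTemplate> </asp:UpdatePanel> Each partial postback runs the full page lifecycle and rewrites the page's ViewState, which is exactly the behavior the raw ICallbackEventHandler path lacks.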
Q: MVC Preview 4 - No route in the route table matches the supplied values I have a route that I am calling through a RedirectToRoute like this: return this.RedirectToRoute("Super-SuperRoute", new { year = selectedYear }); I have also tried: return this.RedirectToRoute("Super-SuperRoute", new { controller = "Super", action = "SuperRoute", id = "RouteTopic", year = selectedYear }); The route in the global.asax is like this: routes.MapRoute( "Super-SuperRoute", // Route name "Super.mvc/SuperRoute/{year}", // URL with parameters new { controller = "Super", action = "SuperRoute", id = "RouteTopic" } // Parameter defaults ); So why do I get the error: "No route in the route table matches the supplied values."? I saw that the type of selectedYear was var. When I tried to convert to int with int.Parse I realised that selectedYear was actually null, which would explain the problems. I guess next time I'll pay more attention to the values of the variables at a breakpoint :) A: What type is selectedYear? A DateTime? If so then you might need to convert to a string.
MVC Preview 4 - No route in the route table matches the supplied values
I have a route that I am calling through a RedirectToRoute like this: return this.RedirectToRoute("Super-SuperRoute", new { year = selectedYear }); I have also tried: return this.RedirectToRoute("Super-SuperRoute", new { controller = "Super", action = "SuperRoute", id = "RouteTopic", year = selectedYear }); The route in the global.asax is like this: routes.MapRoute( "Super-SuperRoute", // Route name "Super.mvc/SuperRoute/{year}", // URL with parameters new { controller = "Super", action = "SuperRoute", id = "RouteTopic" } // Parameter defaults ); So why do I get the error: "No route in the route table matches the supplied values."? I saw that the type of selectedYear was var. When I tried to convert to int with int.Parse I realised that selectedYear was actually null, which would explain the problems. I guess next time I'll pay more attention to the values of the variables at a breakpoint :)
[ "What type is selectedYear? A DateTime? If so then you might need to convert to a string.\n" ]
[ 5 ]
[]
[]
[ "asp.net_mvc", "asp.net_mvc_routing" ]
stackoverflow_0000005690_asp.net_mvc_asp.net_mvc_routing.txt
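Since the asker's root cause turned out to be a null selectedYear, a small defensive sketch in C# to match the snippets above (this assumes selectedYear is a string; the fallback to the current year is an arbitrary choice): if (selectedYear == null) { selectedYear = DateTime.Now.Year.ToString(); // or render an error view instead } return this.RedirectToRoute("Super-SuperRoute", new { year = selectedYear }); With a null route value the {year} segment cannot be filled in, so no route matches and you get exactly the exception described.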
Q: Is there a lightweight, preferably open source, formattable label control for .NET? I have been looking for a way to utilize a simple markup language, or just plain HTML, when displaying text in WinForm applications. I would like to avoid embedding a web browser control since in most cases I just want to highlight a single word or two in a sentence. I have looked at using an RTFControl but I believe it's a bit heavy and I don't think the "language" used to do the formatting is easy. Is there a simple control that allows me to display strings like: This is a sample string with different formatting. It would be really neat if it was also possible to specify a font and/or size for the text. Oh, .NET 3.5 and WPF/XAML are not an option. A: Well, just use HTML. We have used the following 'FREE' control in some of our applications, and it's just beautiful. We can define the UI in HTML Markup and then render it using this control: http://www.terrainformatica.com/htmlayout/main.whtm Initially, we started looking at HtmlToRTF converters so that we can use an RTF control to render UI, but there are far too many options to match between the two formats. And so, we ended up using the above control. The only pre-condition is a mention of their name in your About Box.
Is there a lightweight, preferably open source, formattable label control for .NET?
I have been looking for a way to utilize a simple markup language, or just plain HTML, when displaying text in WinForm applications. I would like to avoid embedding a web browser control since in most cases I just want to highlight a single word or two in a sentence. I have looked at using an RTFControl but I believe it's a bit heavy and I don't think the "language" used to do the formatting is easy. Is there a simple control that allows me to display strings like: This is a sample string with different formatting. It would be really neat if it was also possible to specify a font and/or size for the text. Oh, .NET 3.5 and WPF/XAML are not an option.
[ "Well, just use HTML. We have used the following 'FREE' control in some of our applications, and it's just beautiful.\nWe can define the UI in HTML Markup and then render it using this control:\nhttp://www.terrainformatica.com/htmlayout/main.whtm\nInitially, we started looking at HtmlToRTF converters so that we can use an RTF control to render UI, but there is far too many options to match between the two formats. And so, we ended up using the above control.\nThe only pre-condition is a mention of their name in your About Box.\n" ]
[ 7 ]
[]
[]
[ ".net" ]
stackoverflow_0000005704_.net.txt
Q: Is it just me, or are characters being rendered incorrectly more lately? I'm not sure if it's my system, although I haven't done anything unusual with it, but I've started noticing incorrectly rendered characters popping up in web pages, text-files, like this: http://www.kbssource.com/strange-characters.gif I have a hunch it's related to the fairly recent trend to use unicode for everything, which is a good thing I think, combined with fonts that don't support all possible characters. So, does anyone know what's causing these blips (am I right?), and how do I stop this showing up in my own content? A: It appears that for this particular author, the text was edited in some editor that assumed it wasn't UTF8, and then re-wrote it out in UTF8. I'm basing this off the fact that if I tell my browser to interpret the page as different common encodings, none make it display correctly. This tells me that some conversion was done at some point improperly. The only problem with UTF8 is that there isn't a standardized way to recognize that a file is UTF8, and until all editors are standardizing on UTF8, there will still be conversion errors. For other unicode variants, a Byte Order Mark (BOM) is fairly standard to help identify a file, but BOMs in UTF8 files are pretty rare. To keep it from showing up in your content, make sure you're always using unicode-aware editors, and make sure that you always open your files with the proper encodings. It's a pain, unfortunately, and errors will occasionally crop up. The key is just catching them early so that you can undo it or make a few edits. A: I'm fairly positive it's nothing you can do. I've seen this on the front page of digg alot recently. It more than likely has to do with a character being encoded improperly. Not necessarily a factor of the font, just a mistake made somewhere in translation. A: It looked for a while like the underscore and angle bracket problem had gone away, but it seems it might not be fixed. here's a small sample, which should look like this: #include ____ #include <stdio.h> ____ #include Update: it looks like it's fixed in display mode, and only broken in edit mode
Is it just me, or are characters being rendered incorrectly more lately?
I'm not sure if it's my system, although I haven't done anything unusual with it, but I've started noticing incorrectly rendered characters popping up in web pages, text-files, like this: http://www.kbssource.com/strange-characters.gif I have a hunch it's related to the fairly recent trend to use unicode for everything, which is a good thing I think, combined with fonts that don't support all possible characters. So, does anyone know what's causing these blips (am I right?), and how do I stop this showing up in my own content?
[ "It appears that for this particular author, the text was edited in some editor that assumed it wasn't UTF8, and then re-wrote it out in UTF8. I'm basing this off the fact that if I tell my browser to interpret the page as different common encodings, none make it display correctly. This tells me that some conversion was done at some point improperly.\nThe only problem with UTF8 is that there isn't a standardized way to recognize that a file is UTF8, and until all editors are standardizing on UTF8, there will still be conversion errors. For other unicode variants, a Byte Order Mark (BOM) is fairly standard to help identify a file, but BOMs in UTF8 files are pretty rare.\nTo keep it from showing up in your content, make sure you're always using unicode-aware editors, and make sure that you always open your files with the proper encodings. It's a pain, unfortunately, and errors will occasionally crop up. The key is just catching them early so that you can undo it or make a few edits.\n", "I'm fairly positive it's nothing you can do. I've seen this on the front page of digg alot recently. It more than likely has to do with a character being encoded improperly. Not necessarily a factor of the font, just a mistake made somewhere in translation.\n", "It looked for a while like the underscore and angle bracket problem had gone away, but it seems it might not be fixed.\nhere's a small sample, which should look like this:\n\n#include \n____\n#include <stdio.h>\n\n\n____\n#include \n\nUpdate: it looks like it's fixed in display mode, and only broken in edit mode\n" ]
[ 2, 0, 0 ]
[]
[]
[ "fonts", "unicode", "utf_8" ]
stackoverflow_0000005682_fonts_unicode_utf_8.txt
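If the double conversion described in the first answer is the culprit (UTF-8 bytes mis-read as Windows-1252 and then re-encoded), the damage can sometimes be reversed by replaying the mistake backwards. A hedged C# sketch - this only helps for that one specific failure mode: using System.Text; static string FixDoubleEncodedUtf8(string garbled) { // Turn the mojibake characters back into the bytes they were mis-decoded from, // then decode those bytes as the UTF-8 they originally were. byte[] bytes = Encoding.GetEncoding(1252).GetBytes(garbled); return Encoding.UTF8.GetString(bytes); }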
Q: mailto link for large bodies I have a page upon which a user can choose up to many different paragraphs. When the link is clicked (or button), an email will open up and put all those paragraphs into the body of the email, address it, and fill in the subject. However, the text can be too long for a mailto link. Any way around this? We were thinking about having an SP from the SQL Server do it but the user needs a nice way of 'seeing' the email before they blast 50 executive level employees with items that shouldn't be sent...and of course there's the whole thing about doing IT for IT rather than doing software programming. 80( When you build stuff for IT, it doesn't (some say shouldn't) have to be pretty, just functional. In other words, this isn't the dog food we make, it's just the dog food we have to eat. We started talking about it and decided that the 'mail form' would give us exactly what we are looking for. A very different look to let the user know that the gun is loaded and aimed. The ability to change/add text to the email. Send a copy to themselves or not. Can be coded quickly. A: By putting the data into a form, I was able to make the body around 1800 characters long before the form stopped working. The code looked like this: <form action="mailto:[email protected]"> <input type="hidden" name="Subject" value="Email subject"> <input type="hidden" name="Body" value="Email body"> <input type="submit"> </form> Edit: The best way to send emails from a web application is of course to do just that, send it directly from the web application, instead of relying on the users mailprogram. As you've discovered, the protocol for sending information to that program is limited, but with a server-based solution you would of course not have those limitations. A: Does the e-mail content need to be in the e-mail? Could you store the large content somewhere centrally (file-share/FTP site) then just send a link to the content? This makes the recipient have an extra step, but you have a consistent e-mail size, so won't run into reliability problems due to unexpectedly large or excessive content.
mailto link for large bodies
I have a page upon which a user can choose up to many different paragraphs. When the link is clicked (or button), an email will open up and put all those paragraphs into the body of the email, address it, and fill in the subject. However, the text can be too long for a mailto link. Any way around this? We were thinking about having an SP from the SQL Server do it but the user needs a nice way of 'seeing' the email before they blast 50 executive level employees with items that shouldn't be sent...and of course there's the whole thing about doing IT for IT rather than doing software programming. 80( When you build stuff for IT, it doesn't (some say shouldn't) have to be pretty, just functional. In other words, this isn't the dog food we make, it's just the dog food we have to eat. We started talking about it and decided that the 'mail form' would give us exactly what we are looking for. A very different look to let the user know that the gun is loaded and aimed. The ability to change/add text to the email. Send a copy to themselves or not. Can be coded quickly.
[ "By putting the data into a form, I was able to make the body around 1800 characters long before the form stopped working.\nThe code looked like this:\n<form action=\"mailto:[email protected]\">\n <input type=\"hidden\" name=\"Subject\" value=\"Email subject\">\n <input type=\"hidden\" name=\"Body\" value=\"Email body\">\n <input type=\"submit\">\n</form>\n\n\nEdit: The best way to send emails from a web application is of course to do just that, send it directly from the web application, instead of relying on the users mailprogram. As you've discovered, the protocol for sending information to that program is limited, but with a server-based solution you would of course not have those limitations.\n", "Does the e-mail content need to be in the e-mail? Could you store the large content somewhere centrally (file-share/FTP site) then just send a link to the content?\nThis makes the recipient have an extra step, but you have a consistent e-mail size, so won't run into reliability problems due to unexpectedly large or excessive content.\n" ]
[ 15, 0 ]
[]
[]
[ "mailto" ]
stackoverflow_0000005857_mailto.txt
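A sketch of the server-based approach the first answer's edit recommends, using System.Net.Mail from ASP.NET; the addresses and SMTP host are placeholders, and longBody stands for whatever the chosen paragraphs concatenate to. The 'mail form' preview the asker settled on can simply post into code like this: using System.Net.Mail; var message = new MailMessage("[email protected]", "[email protected]") { Subject = "Selected paragraphs", Body = longBody // no mailto length limit applies here }; new SmtpClient("smtp.example.com").Send(message);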
Q: How do I know which SQL Server 2005 index recommendations to implement, if any? We're in the process of upgrading one of our SQL Server instances from 2000 to 2005. I installed the performance dashboard (http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en) for access to some high level reporting. One of the reports shows missing (recommended) indexes. I think it's based on some system view that is maintained by the query optimizer. My question is what is the best way to determine when to take an index recommendation. I know that it doesn't make sense to apply all of the optimizer's suggestions. I see a lot of advice that basically says to try the index and to keep it if performance improves and to drop it if performance degrades or stays the same. I'm wondering if there is a better way to make the decision and what best practices exist on this subject. A: First thing to be aware of: When you upgrade from 2000 to 2005 (by using detach and attach) make sure that you: Set compatibility to 90 Rebuild the indexes Run update statistics with full scan If you don't do this you will get suboptimal plans. IF the table is mostly write you want as few indexes as possible IF the table is used for a lot of read queries you have to make sure that the WHERE clause is covered by indexes. A: The advice you got is right. Try them all, one by one. There is NO substitute for testing when it comes to performance. Unless you prove it, you haven't done anything. A: You're best off researching the most common types of queries that happen on your database and creating indexes based on that research. For example, if there is a table which stores website hits, which is written to very very often but hardly ever read from, then don't index the table in any way. If however you have a list of users which is accessed more often than it is written to, then I would firstly create a clustered index on the column that is accessed the most, usually the primary key. I would then create an index on commonly searched columns, and those which are used in order by clauses.
How do I know which SQL Server 2005 index recommendations to implement, if any?
We're in the process of upgrading one of our SQL Server instances from 2000 to 2005. I installed the performance dashboard (http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en) for access to some high level reporting. One of the reports shows missing (recommended) indexes. I think it's based on some system view that is maintained by the query optimizer. My question is what is the best way to determine when to take an index recommendation. I know that it doesn't make sense to apply all of the optimizer's suggestions. I see a lot of advice that basically says to try the index and to keep it if performance improves and to drop it if performance degrades or stays the same. I'm wondering if there is a better way to make the decision and what best practices exist on this subject.
[ "First thing to be aware of:\nWhen you upgrade from 2000 to 2005 (by using detach and attach) make sure that you:\n\nSet compability to 90\nRebuild the indexes\nRun update statistics with full scan\n\nIf you don't do this you will get suboptimal plans.\nIF the table is mostly write you want as few indexes as possible\nIF the table is used for a lot of read queries you have to make sure that the WHERE clause is covered by indexes.\n", "The advice you got is right. Try them all, one by one. \nThere is NO substitute for testing when it comes to performance. Unless you prove it, you haven't done anything.\n", "Your best researching the most common type of queries that happen on your database and creating indexes based on that research.\nFor example, if there is a table which stores website hits, which is written to very very often but hardly even read from. Then don't index the table in away.\nIf how ever you have a list of users which is access more often than is written to, then I would firstly create a clustered index on the column that is access the most, usually the primary key. I would then create an index on commonly search columns, and those which are use in order by clauses.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "sql_server", "sql_server_2005" ]
stackoverflow_0000003975_sql_server_sql_server_2005.txt
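The system views behind the dashboard report can be queried directly, which helps rank the recommendations before you start testing them one by one as the answers advise - a sketch (SQL Server 2005 and later): SELECT d.statement AS full_table_name, d.equality_columns, d.inequality_columns, d.included_columns, s.user_seeks, s.avg_total_user_cost, s.avg_user_impact FROM sys.dm_db_missing_index_details AS d JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle ORDER BY s.user_seeks * s.avg_total_user_cost * s.avg_user_impact DESC; Suggestions near the top of this list are usually the only ones worth trialling; note the DMVs reset on every service restart.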
Q: User access log to SQL Server I need to get a log of user access to our SQL Server so I can track average and peak concurrency usage. Is there a hidden table or something I'm missing that has this information for me? To my knowledge the application I'm looking at does not track this at the application level. I'm currently working on SQL Server 2000, but will be moving to SQL Server 2005 shortly, so solutions for both are greatly appreciated. A: In SQL Server 2005, go to tree view on the left and select Server (name of the actual server) > Management > Activity Monitor. Hope this helps. A: on 2000 you can use sp_who2 or the dbo.sysprocesses system table on 2005 take a look at the sys.dm_exec_sessions DMV Below is an example SELECT COUNT(*) AS StatusCount,CASE status WHEN 'Running' THEN 'Running - Currently running one or more requests' WHEN 'Sleeping ' THEN 'Sleeping - Currently running no requests' ELSE 'Dormant – Session is in prelogin state' END status FROM sys.dm_exec_sessions GROUP BY status
User access log to SQL Server
I need to get a log of user access to our SQL Server so I can track average and peak concurrency usage. Is there a hidden table or something I'm missing that has this information for me? To my knowledge the application I'm looking at does not track this at the application level. I'm currently working on SQL Server 2000, but will be moving to SQL Server 2005 shortly, so solutions for both are greatly appreciated.
[ "In SQL Server 2005, go to tree view on the left and select Server (name of the actual server) > Management > Activity Monitor. Hope this helps.\n", "\non 2000 you can use sp_who2 or the dbo.sysprocesses system table\non 2005 take a look at the sys.dm_exec_sessions DMV\n\nBelow is an example\nSELECT COUNT(*) AS StatusCount,CASE status \nWHEN 'Running' THEN 'Running - Currently running one or more requests' \nWHEN 'Sleeping ' THEN 'Sleeping - Currently running no requests' \nELSE 'Dormant – Session is in prelogin state' END status \nFROM sys.dm_exec_sessions \nGROUP BY status\n\n" ]
[ 7, 6 ]
[]
[]
[ "logging", "sql_server", "statistics" ]
stackoverflow_0000005908_logging_sql_server_statistics.txt
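Neither sp_who2 nor the DMV keeps history, so to get average and peak concurrency you can snapshot the session count on a schedule (a SQL Agent job every few minutes, say) into your own table. A sketch that also works on SQL Server 2000 (the table name is a placeholder; spid > 50 is the usual heuristic for skipping system sessions): CREATE TABLE dbo.ConnectionLog ( LoggedAt DATETIME NOT NULL DEFAULT GETDATE(), SessionCount INT NOT NULL ); -- Scheduled step: INSERT INTO dbo.ConnectionLog (SessionCount) SELECT COUNT(*) FROM master.dbo.sysprocesses WHERE spid > 50; Averages and peaks then fall out of simple GROUP BY queries over LoggedAt.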
Q: API Yahoo India Maps Yahoo has separate map for India ( which has more details than the regular maps.yahoo.com) at http://in.maps.yahoo.com/ . But when I use the API it goes to default map. How do I get API access to YMaps India? A: I don't know about yahoo, but there is another mapping website that provides an API for India. http://biz.mapmyindia.com/APIs.html
API Yahoo India Maps
Yahoo has separate map for India ( which has more details than the regular maps.yahoo.com) at http://in.maps.yahoo.com/ . But when I use the API it goes to default map. How do I get API access to YMaps India?
[ "I don't know about yahoo, but there is another mapping website that provides an API for India.\nhttp://biz.mapmyindia.com/APIs.html\n" ]
[ 1 ]
[]
[]
[ "yahoo_api", "yahoo_maps" ]
stackoverflow_0000005918_yahoo_api_yahoo_maps.txt
Q: Should I provide accessor methods / Getter Setters for public/protected components on a form? If I have .Net Form with a component/object such as a textbox that I need to access from a parent or other form I obviously need to "upgrade" the modifier to this component to an Internal or Public level variable. Now, if I were providing a public variable of an int or string type etc. in my form class I wouldn't think twice about using Getters and (maybe) Setters around this, even if they didn't do anything other than provide direct access to the variable. However, the VS designer doesn't seem to implement such Getters/Setters for those public objects that are components on a form (and therefore does not comply with good programming practice). So, the question is; In order to do the "right thing" should I wrap such VS designer components or objects in a Getter and/or Setter? A: "However, the VS designer doesn't seem to implement such Getters/Setters for those public objects that are components on a form (and therefore does not comply with good programming practice)." If you mean the controls you're dragging and dropping onto the form, these are marked as private instance members and are added to the form's Controls collection. Why would they be otherwise? A form could have forty or fifty controls, it'd be somewhat unnecessary and unwieldy to provide a getter/setter for every control on the form. The designer leaves it up to you to provide delegated access to specific controls via public getter/setters. The designer does the right thing here. A: The reason for not implementing Getters and Setters for components on a form I believe is because they wouldn't be "Thread Safe". .NET objects are supposed to be modified only by the form thread that created them. If you put on getters and setters you are potentially opening it up for any thread. Instead you're supposed to implement a delegate system where changes to these objects are delegated to the thread that created them and run there. A: This is a classic example of encapsulation in object-oriented design. A Form is an object whose responsibility is to present UI to the user and accept input. The interface between the Form object and other areas of the code should be a data-oriented interface, not an interface which exposes the inner implementation details of the Form. The inner workings of the Form (ie, the controls) should remain hidden from any consuming code. A mature solution would probably involve the following design points: Public methods or properties are behavior (show, hide, position) or data-oriented (set data, get data, update data). All event handlers implemented by the Form are wrapped in appropriate thread delegation code to enforce Form thread-execution rules. Controls themselves would be data-bound to the underlying data structure (where appropriate) to reduce code. And that's not even mentioning meta-development things like unit tests. A: I always do that, and if you ARE following an MVP design creating getter/setters for your view components would be a design requirement. I do not understand what you mean by "does not comply with good programming practice". Microsoft violates a lot of good programming practices to make it easier to create stuff on Visual Studio (for the sake of rapid app development) and I do not see the lack of getters/setters for controls as evidence of violating any such best practices.
Should I provide accessor methods / Getter Setters for public/protected components on a form?
If I have .Net Form with a component/object such as a textbox that I need to access from a parent or other form I obviously need to "upgrade" the modifier to this component to an Internal or Public level variable. Now, if I were providing a public variable of an int or string type etc. in my form class I wouldn't think twice about using Getters and (maybe) Setters around this, even if they didn't do anything other than provide direct access to the variable. However, the VS designer doesn't seem to implement such Getters/Setters for those public objects that are components on a form (and therefore does not comply with good programming practice). So, the question is; In order to do the "right thing" should I wrap such VS designer components or objects in a Getter and/or Setter?
[ "\"However, the VS designer doesn't seem to implement such Getters/Setters for those public objects that are components on a form (and therefore does not comply with good programming practice).\"\nIf you mean the controls you're dragging and dropping onto the form, these are marked as private instance members and are added to the form's Controls collection. Why would they be otherwise? A form could have forty or fifty controls, it'd be somewhat unnecessary and unwieldy to provide a getter/setter for every control on the form. The designer leaves it up to you to provide delegated access to specific controls via public getter/setters.\nThe designer does the right thing here.\n", "The reason for not implementing Getters and Setters for components on a form I believe is cause they wouldn't be \"Thread Safe\" .NET objects are suppose to be only modified by the form thread that created them, If you put on getter and setters you are potentially opening it up for any thread. Instead your suppose to implement a delegate system where changes to these objects are delegated to the thread that created them and ran there.\n", "This is a classic example of encapsulation in object-oriented design.\nA Form is an object whose responsibility is to present UI to the user and accept input. The interface between the Form object and other areas of the code should be a data-oriented interface, not an interface which exposes the inner implementation details of the Form. The inner workings of the Form (ie, the controls) should remain hidden from any consuming code.\nA mature solution would probably involve the following design points:\n\nPublic methods or properties are behavior (show, hide, position) or data-oriented (set data, get data, update data).\nAll event handlers implemented by the Form are wrapped in appropriate thread delegation code to enforce Form thread-execution rules.\nControls themselves would be data-bound to the underlying data structure (where appropriate) to reduce code.\n\nAnd that's not even mentioning meta-development things like unit tests.\n", "I always do that, and if you ARE following an MVP design creating getter/setters for your view components would be a design requirement.\nI do not understand what you mean by \"does not comply with good programming practice\". Microsoft violates a lot of good programming practices to make it easier to create stuff on Visual Studio (for the sake of rapid app development) and I do not see the lack of getters/setters for controls as evidence of violating any such best practices.\n" ]
[ 5, 2, 2, 1 ]
[]
[]
[ ".net", "winforms" ]
stackoverflow_0000005997_.net_winforms.txt
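A hedged illustration of the "delegated access" pattern the first answer in the record above recommends: the designer-generated control stays private, and the form exposes only a small, data-oriented property. This is a sketch, not the record's own code; the class and control names are invented for the example.

```csharp
using System.Windows.Forms;

public class ContractForm : Form
{
    // In a real project this field lives in the designer file and stays private.
    private TextBox contractNameTextBox = new TextBox();

    // Callers get the data they need without ever touching the control itself.
    public string ContractName
    {
        get { return contractNameTextBox.Text; }
        set { contractNameTextBox.Text = value; }
    }
}
```

A parent form can then read `child.ContractName` instead of upgrading the TextBox's access modifier to Internal or Public.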
Q: Any solution to Illegal Cross Thread Operation exception? When you data bind in C#, the thread that changes the data causes the control to change too. But if this thread is not the one on which the control was created, you'll get an Illegal Cross Thread Operation exception. Is there any way to prevent this? A: You should be able to do something like: if (control.InvokeRequired) { control.Invoke(delegateWithMyCode); } else { delegateWithMyCode(); } InvokeRequired is a property on Controls to see if you are on the correct thread, then Invoke will invoke the delegate on the correct thread. UPDATE: Actually, at my last job we did something like this: private void SomeEventHandler(Object someParam) { if (this.InvokeRequired) { this.Invoke(new SomeEventHandlerDelegate(SomeEventHandler), someParam); return; } // Regular handling code } which removes the need for the else block and kind of tightens up the code. A: As I don't have a test case to go from I can't guarantee this solution, but it seems to me that a scenario similar to the one used to update progress bars in different threads (use a delegate) would be suitable here. public delegate void DataBindDelegate(); public DataBindDelegate BindData = new DataBindDelegate(DoDataBind); public void DoDataBind() { DataBind(); } If the data binding needs to be done by a particular thread, then let that thread do the work! A: If the thread call is "illegal" (i.e. the DataBind call affects controls that were not created in the thread it is being called from) then you need to create a delegate so that even if the decision / preparation for the DataBind is not done in the control-creating thread, any resultant modification of them (i.e. DataBind()) will be. You would call my code from the worker thread like so: this.BindData.Invoke(); This would then cause the original thread to do the binding, which (presuming it is the thread that created the controls) should work. A: In WPF and Silverlight the binding infrastructure takes care of the switching to the UI thread.
Any solution to Illegal Cross Thread Operation exception?
When you data bind in C#, the thread that changes the data causes the control to change too. But if this thread is not the one on which the control was created, you'll get an Illegal Cross Thread Operation exception. Is there any way to prevent this?
[ "You should be able to do something like:\nif (control.InvokeRequired)\n{\n control.Invoke(delegateWithMyCode);\n}\nelse\n{\n delegateWithMyCode();\n}\n\nInvokeRequired is a property on Controls to see if you are on the correct thread, then Invoke will invoke the delegate on the correct thread.\nUPDATE: Actually, at my last job we did something like this:\nprivate void SomeEventHandler(Object someParam)\n{\n if (this.InvokeRequired)\n {\n this.Invoke(new SomeEventHandlerDelegate(SomeEventHandler), someParam);\n }\n\n // Regular handling code\n}\n\nwhich removes the need for the else block and kind of tightens up the code.\n", "As I don't have a test case to go from I can't guarantee this solution, but it seems to me that a scenario similar to the one used to update progress bars in different threads (use a delegate) would be suitable here.\npublic delegate void DataBindDelegate();\npublic DataBindDelegate BindData = new DataBindDelegate(DoDataBind);\n\npublic void DoDataBind()\n{\n DataBind();\n}\n\nIf the data binding needs to be done by a particular thread, then let that thread do the work!\n", "If the thread call is \"illegal\" (i.e. the DataBind call affects controls that were not created in the thread it is being called from) then you need to create a delegate so that even if the decision / preparation for the DataBind is not done in the control-creating thread, any resultant modification of them (i.e. DataBind()) will be.\nYou would call my code from the worker thread like so:\nthis.BindData.Invoke();\n\nThis would then cause the original thread to do the binding, which (presuming it is the thread that created the controls) should work.\n", "In WPF and Silverlight the binding infrastructure takes care of the switching to the UI thread.\n" ]
[ 5, 2, 0, 0 ]
[]
[]
[ "c#", "data_binding" ]
stackoverflow_0000001994_c#_data_binding.txt
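The accepted pattern in the record above can be wrapped once and reused. Below is a hedged sketch of such a helper as a C# 3.0 extension method; it is not from the record, and `SafeInvoke` is an invented name.

```csharp
using System;
using System.Windows.Forms;

public static class ControlExtensions
{
    // Marshals the action onto the control's owning thread only when needed,
    // mirroring the InvokeRequired/Invoke pattern from the accepted answer.
    public static void SafeInvoke(this Control control, Action action)
    {
        if (control.InvokeRequired)
            control.Invoke(action);
        else
            action();
    }
}

// Usage from a worker thread (statusLabel is illustrative):
// statusLabel.SafeInvoke(() => statusLabel.Text = "Data bound");
```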
Q: Memcached chunk limit Why is there a hardcoded chunk limit (.5 meg after compression) in memcached? Has anyone recompiled theirs to up it? I know I should not be sending big chunks like that around, but these extra heavy chunks happen for me from time to time and wreak havoc. A: This question used to be in the official FAQ What are some limits in memcached I might hit? (Wayback Machine) To quote: The simple limits you will probably see with memcache are the key and item size limits. Keys are restricted to 250 characters. Stored data cannot exceed 1 megabyte in size, since that is the largest typical slab size. The FAQ has now been revised and there are now two separate questions covering this: What is the maximum key length? (250 bytes) The maximum size of a key is 250 characters. Note this value will be less if you are using client "prefixes" or similar features, since the prefix is tacked onto the front of the original key. Shorter keys are generally better since they save memory and use less bandwidth. Why are items limited to 1 megabyte in size? Ahh, this is a popular question! Short answer: Because of how the memory allocator's algorithm works. Long answer: Memcached's memory storage engine (which will be pluggable/adjusted in the future...), uses a slabs approach to memory management. Memory is broken up into slab chunks of varying sizes, starting at a minimum number and ascending by a factorial up to the largest possible value. Say the minimum value is 400 bytes, and the maximum value is 1 megabyte, and the factorial is 1.20: slab 1 - 400 bytes slab 2 - 480 bytes slab 3 - 576 bytes ... etc. The larger the slab, the more of a gap there is between it and the previous slab. So the larger the maximum value the less efficient the memory storage is. Memcached also has to pre-allocate some memory for every slab that exists, so setting a smaller factorial with a larger max value will require even more overhead. There're other reasons why you wouldn't want to do that... If we're talking about a web page and you're attempting to store/load values that large, you're probably doing something wrong. At that size it'll take a noticeable amount of time to load and unpack the data structure into memory, and your site will likely not perform very well. If you really do want to store items larger than 1MB, you can recompile memcached with an edited slabs.c:POWER_BLOCK value, or use the inefficient malloc/free backend. Other suggestions include a database, MogileFS, etc.
Memcached chunk limit
Why is there a hardcoded chunk limit (.5 meg after compression) in memcached? Has anyone recompiled theirs to up it? I know I should not be sending big chunks like that around, but these extra heavy chunks happen for me from time to time and wreak havoc.
[ "This question used to be in the official FAQ\nWhat are some limits in memcached I might hit? (Wayback Machine)\nTo quote:\n\nThe simple limits you will probably see with memcache are the key and \n item size limits. Keys are restricted to 250 characters. Stored data \n cannot exceed 1 megabyte in size, since that is the largest typical \n slab size.\"\n\nThe FAQ has now been revised and there are now two separate questions covering this:\nWhat is the maxiumum key length? (250 bytes)\n\nThe maximum size of a key is 250 characters. Note this value will be\n less if you are using client \"prefixes\" or similar features, since the\n prefix is tacked onto the front of the original key. Shorter keys are\n generally better since they save memory and use less bandwidth.\n\nWhy are items limited to 1 megabyte in size?\n\nAhh, this is a popular question!\nShort answer: Because of how the memory allocator's algorithm works.\nLong answer: Memcached's memory storage engine (which will be\n pluggable/adjusted in the future...), uses a slabs approach to memory\n management. Memory is broken up into slabs chunks of varying sizes,\n starting at a minimum number and ascending by a factorial up to the\n largest possible value.\nSay the minimum value is 400 bytes, and the maximum value is 1\n megabyte, and the factorial is 1.20:\nslab 1 - 400 bytes slab 2 - 480 bytes slab 3 - 576 bytes ... etc.\nThe larger the slab, the more of a gap there is between it and the\n previous slab. So the larger the maximum value the less efficient the\n memory storage is. Memcached also has to pre-allocate some memory for\n every slab that exists, so setting a smaller factorial with a larger\n max value will require even more overhead.\nThere're other reason why you wouldn't want to do that... If we're\n talking about a web page and you're attempting to store/load values\n that large, you're probably doing something wrong. At that size it'll\n take a noticeable amount of time to load and unpack the data structure\n into memory, and your site will likely not perform very well.\nIf you really do want to store items larger than 1MB, you can\n recompile memcached with an edited slabs.c:POWER_BLOCK value, or use\n the inefficient malloc/free backend. Other suggestions include a\n database, MogileFS, etc.\n\n" ]
[ 4 ]
[]
[]
[ "membership" ]
stackoverflow_0000005349_membership.txt
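The slab arithmetic quoted in the record above (a minimum size growing by a fixed factor) is easy to reproduce. Here is a small Python sketch of that growth curve; it mirrors the answer's 400-byte / 1.2-factor example and is not memcached's actual allocator, which rounds and aligns sizes differently.

```python
def slab_sizes(minimum=400, factor=1.2, maximum=1048576):
    """Yield slab class sizes from `minimum` up to `maximum`."""
    size = float(minimum)
    while size <= maximum:
        yield int(size)
        size *= factor

for number, size in enumerate(slab_sizes(), start=1):
    print("slab %d - %d bytes" % (number, size))
# slab 1 - 400 bytes, slab 2 - 480 bytes, slab 3 - 576 bytes, ...
```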
Q: Making a production build of a PHP project with Subversion If you are working in PHP (or I guess any programming language) and using subversion as your source control, is there a way to take your project (for example): C:\Projects\test\.svn C:\Projects\test\docs\ C:\Projects\test\faq.php C:\Projects\test\guestbook.php C:\Projects\test\index.php C:\Projects\test\test.php and build/copy/whatever it so it weeds out certain files and becomes: C:\Projects\test\faq.php C:\Projects\test\guestbook.php C:\Projects\test\index.php automatically? I'm getting tired of making a branch, and then going through the branch and deleting all of the ".svn" folders, the docs directory, and my prototyping files. I know I could probably use a .bat file to only copy the specific files I want, but I was hoping there was some way with subversion to sort of pseudo ignore a file, to where it will still version it, but where you could make a snapshot of the project that ignores the files you told it to pseudo ignore. I know I read online somewhere about some functionality that at least lets you copy without the .svn folders, but I can't find it now. A: If you use TortoiseSVN, you can use the export feature to automatically strip out all of the .svn files. I think other svn things have the same feature. Right click the root project folder, then select TortoiseSVN > Export, and tell it where you want the .svn free directory. A: Copy all the files manually or using your existing method for the first time. Then, since I take it you're on a Windows platform, install SyncToy and configure it in the subscribe method, which would effectively one-way copy only the changes made since the last pseudo-commit to production for files already in production. If you want to add a file you can just copy it manually and resume the SyncToy operation. A: Ok, so my final solution is this: Use the export command to export to a folder called "export" in the same directory as a file called "deploy.bat", then I run the deploy script (v1 stands for version 1, which is what version I am currently on in this project) This script utilizes 7-Zip, which I have placed on my system path so I can use it as a command line utility: rem replace the v1 directory with the export directory rd /s /q v1 move /y export\newIMS v1 rd /s /q export rem remove the prepDocs directory from the project rd /s /q v1\prepDocs rem remove the scripts directory from the project rd /s /q v1\scripts rem remove individual files from project del v1\.project rem del v1\inc\testLoad.html rem del v1\inc\testInc.js SET /P version=Please enter version number: rem zip the file up with 7-Zip and name it after whatever version number the user typed in. 7z a -r v%version%.zip v1 rem copy everything to the shared space ready for deployment xcopy v%version%.zip /s /q /y /i "Z:\IT\IT Security\IT Projects\IMS\v%version%.zip" xcopy v1 /s /q /y /i "Z:\IT\IT Security\IT Projects\IMS\currentVersion" rem keep the window open until user presses any key PAUSE I didn't have time to check out the SyncToy solution, so don't take this as me rejecting that method. I just knew how to do this, and didn't have time to check that one out (under a time crunch right now). Sources: http://commandwindows.com/command2.htm http://www.ss64.com/nt/
Making a production build of a PHP project with Subversion
If you are working in PHP (or I guess any programming language) and using subversion as your source control, is there a way to take your project (for example): C:\Projects\test\.svn C:\Projects\test\docs\ C:\Projects\test\faq.php C:\Projects\test\guestbook.php C:\Projects\test\index.php C:\Projects\test\test.php and build/copy/whatever it so it weeds out certain files and becomes: C:\Projects\test\faq.php C:\Projects\test\guestbook.php C:\Projects\test\index.php automatically? I'm getting tired of making a branch, and then going through the branch and deleting all of the ".svn" folders, the docs directory, and my prototyping files. I know I could probably use a .bat file to only copy the specific files I want, but I was hoping there was some way with subversion to sort of pseudo ignore a file, to where it will still version it, but where you could make a snapshot of the project that ignores the files you told it to pseudo ignore. I know I read online somewhere about some functionality that at least lets you copy without the .svn folders, but I can't find it now.
[ "If you use TortoiseSVN, you can use the export feature to automatically strip out all of the .svn files. I think other svn things have the same feature.\nRight click the root project folder, then select TortoiseSVN > Export, and tell it where you want the .svn free directory.\n", "Copy all the files manually or using your existing method for the first time. Then, since I take it you're on a Windows platform, install SyncToy and configure it in the subscribe method, which would effectively one-way copy only the changes made since the last pseudo-commit to production for files already in production. If you want to add a file you can just copy it manually and resume the SyncToy operation.\n", "Ok, so my final solution is this:\nUse the export command to export to a folder called \"export\" in the same directory as a file called \"deploy.bat\", then I run the deploy script (v1 stands for version 1, which is what version I am currently on in this project) This script utilizes 7-Zip, which I have placed on my system path so I can use it as a command line utility:\nrem replace the v1 directory with the export directory\nrd /s /q v1\nmove /y export\\newIMS v1\nrd /s /q export\n\nrem remove the prepDocs directory from the project\nrd /s /q v1\\prepDocs\n\nrem remove the scripts directory from the project\nrd /s /q v1\\scripts\n\nrem remove individual files from project\ndel v1\\.project\nrem del v1\\inc\\testLoad.html\nrem del v1\\inc\\testInc.js\n\nSET /P version=Please enter version number:\n\nrem zip the file up with 7-Zip and name it after whatever version number the user typed in.\n7z a -r v%version%.zip v1\n\nrem copy everything to the shared space ready for deployment\nxcopy v%version%.zip /s /q /y /i \"Z:\\IT\\IT Security\\IT Projects\\IMS\\v%version%.zip\"\nxcopy v1 /s /q /y /i \"Z:\\IT\\IT Security\\IT Projects\\IMS\\currentVersion\"\n\nrem keep the window open until user presses any key\nPAUSE\n\nI didn't have time to check out the SyncToy solution, so don't take this as me rejecting that method. I just knew how to do this, and didn't have time to check that one out (under a time crunch right now).\nSources:\nhttp://commandwindows.com/command2.htm\nhttp://www.ss64.com/nt/\n" ]
[ 6, 2, 1 ]
[]
[]
[ "build_process", "php", "scripting", "svn", "tortoisesvn" ]
stackoverflow_0000005872_build_process_php_scripting_svn_tortoisesvn.txt
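The manual TortoiseSVN export step in the record above can be folded into the deploy script with the svn command-line client. A hedged sketch; the repository URL is illustrative, not from the record:

```bat
rem Fetch a clean, .svn-free copy of the tree before running the rest
rem of deploy.bat; --force lets svn overwrite a stale export folder.
svn export "file:///C:/Repos/test/trunk" export --force
```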
Q: Multiple foreign keys? I've got a table that is supposed to track days and costs for shipping product from one vendor to another. We (brilliantly :p) stored both the shipping vendors (FedEx, UPS) with the product handling vendors (Think... Dunder Mifflin) in a "VENDOR" table. So, I have three columns in my SHIPPING_DETAILS table that all reference VENDOR.no. For some reason MySQL isn't letting me define all three as foreign keys. Any ideas? CREATE TABLE SHIPPING_GRID( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique ID for each row', shipping_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the shipping vendor (vendors_type must be 3)', start_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the vendor being shipped from', end_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to the VENDOR.no for the vendor being shipped to', shipment_duration INT(1) DEFAULT 1 COMMENT 'Duration in whole days shipment will take', price FLOAT(5,5) NOT NULL COMMENT 'Price in US dollars per shipment lbs (down to 5 decimal places)', is_flat_rate TINYINT(1) DEFAULT 0 COMMENT '1 if is flat rate regardless of weight, 0 if price is by lbs', INDEX (shipping_vendor_no), INDEX (start_vendor_no), INDEX (end_vendor_no), FOREIGN KEY (shipping_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (start_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (end_vendor_no) REFERENCES VENDOR (no) ) TYPE = INNODB; Edited to remove double primary key definition... Yeah, unfortunately that didn't fix it though. Now I'm getting: Can't create table './REMOVED MY DB NAME/SHIPPING_GRID.frm' (errno: 150) Doing a phpinfo() tells me this for mysql: Client API version 5.0.45 Yes, the VENDOR.no is type int(6). A: You defined the primary key twice. Try: CREATE TABLE SHIPPING_GRID( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique ID for each row', shipping_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the shipping vendor (vendors_type must be 3)', start_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the vendor being shipped from', end_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to the VENDOR.no for the vendor being shipped to', shipment_duration INT(1) DEFAULT 1 COMMENT 'Duration in whole days shipment will take', price FLOAT(5,5) NOT NULL COMMENT 'Price in US dollars per shipment lbs (down to 5 decimal places)', is_flat_rate TINYINT(1) DEFAULT 0 COMMENT '1 if is flat rate regardless of weight, 0 if price is by lbs', INDEX (shipping_vendor_no), INDEX (start_vendor_no), INDEX (end_vendor_no), FOREIGN KEY (shipping_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (start_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (end_vendor_no) REFERENCES VENDOR (no) ) TYPE = INNODB; The VENDOR primary key must be INT(6), and both tables must be of type InnoDB. A: Can you provide the definition of the VENDOR table I figured it out. The VENDOR table was MyISAM... (edited your answer to tell me to make them both INNODB ;) ) (any reason not to just switch the VENDOR type over to INNODB?) A: I ran the code here, and the error message showed (and it is right!) that you are setting id field twice as primary key.
Multiple foreign keys?
I've got a table that is supposed to track days and costs for shipping product from one vendor to another. We (brilliantly :p) stored both the shipping vendors (FedEx, UPS) with the product handling vendors (Think... Dunder Mifflin) in a "VENDOR" table. So, I have three columns in my SHIPPING_DETAILS table that all reference VENDOR.no. For some reason MySQL isn't letting me define all three as foreign keys. Any ideas? CREATE TABLE SHIPPING_GRID( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique ID for each row', shipping_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the shipping vendor (vendors_type must be 3)', start_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the vendor being shipped from', end_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to the VENDOR.no for the vendor being shipped to', shipment_duration INT(1) DEFAULT 1 COMMENT 'Duration in whole days shipment will take', price FLOAT(5,5) NOT NULL COMMENT 'Price in US dollars per shipment lbs (down to 5 decimal places)', is_flat_rate TINYINT(1) DEFAULT 0 COMMENT '1 if is flat rate regardless of weight, 0 if price is by lbs', INDEX (shipping_vendor_no), INDEX (start_vendor_no), INDEX (end_vendor_no), FOREIGN KEY (shipping_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (start_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (end_vendor_no) REFERENCES VENDOR (no) ) TYPE = INNODB; Edited to remove double primary key definition... Yeah, unfortunately that didn't fix it though. Now I'm getting: Can't create table './REMOVED MY DB NAME/SHIPPING_GRID.frm' (errno: 150) Doing a phpinfo() tells me this for mysql: Client API version 5.0.45 Yes, the VENDOR.no is type int(6).
[ "You defined the primary key twice. Try:\nCREATE TABLE SHIPPING_GRID( \n id INT NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique ID for each row', \n shipping_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the shipping vendor (vendors_type must be 3)', \n start_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the vendor being shipped from', \n end_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to the VENDOR.no for the vendor being shipped to', \n shipment_duration INT(1) DEFAULT 1 COMMENT 'Duration in whole days shipment will take', \n price FLOAT(5,5) NOT NULL COMMENT 'Price in US dollars per shipment lbs (down to 5 decimal places)', \n is_flat_rate TINYINT(1) DEFAULT 0 COMMENT '1 if is flat rate regardless of weight, 0 if price is by lbs', \n INDEX (shipping_vendor_no), \n INDEX (start_vendor_no), \n INDEX (end_vendor_no), \n FOREIGN KEY (shipping_vendor_no) REFERENCES VENDOR (no), \n FOREIGN KEY (start_vendor_no) REFERENCES VENDOR (no), \n FOREIGN KEY (end_vendor_no) REFERENCES VENDOR (no) \n) TYPE = INNODB;\n\nThe VENDOR primary key must be INT(6), and both tables must be of type InnoDB.\n", "\nCan you provide the definition of the\n VENDOR table\n\nI figured it out. The VENDOR table was MyISAM... (edited your answer to tell me to make them both INNODB ;) )\n(any reason not to just switch the VENDOR type over to INNODB?)\n", "I ran the code here, and the error message showed (and it is right!) that you are setting id field twice as primary key.\n" ]
[ 13, 1, 0 ]
[]
[]
[ "foreign_keys", "mysql", "sql" ]
stackoverflow_0000006340_foreign_keys_mysql_sql.txt
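The errno 150 in the record above is exactly the MyISAM-vs-InnoDB mismatch uncovered in its second answer. A minimal sketch of the fix (only the table name comes from the thread):

```sql
-- Convert the referenced table so both sides of the foreign key are InnoDB.
ALTER TABLE VENDOR ENGINE = InnoDB;   -- modern spelling of TYPE = INNODB

-- Verify: the Engine column should now read InnoDB.
SHOW TABLE STATUS LIKE 'VENDOR';
```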
Q: Normalizing a Table with Low Integrity I've been handed a table with about 18000 rows. Each record describes the location of one customer. The issue is that when the person created the table, they did not add a field for "Company Name", only "Location Name," and one company can have many locations. For example, here are some records that describe the same customer: Location Table ID Location_Name 1 TownShop#1 2 Town Shop - Loc 2 3 The Town Shop 4 TTS - Someplace 5 Town Shop,the 3 6 Toen Shop4 My goal is to make it look like: Location Table ID Company_ID Location_Name 1 1 Town Shop#1 2 1 Town Shop - Loc 2 3 1 The Town Shop 4 1 TTS - Someplace 5 1 Town Shop,the 3 6 1 Toen Shop4 Company Table Company_ID Company_Name 1 The Town Shop There is no "Company" table, I will have to generate the Company Name list from the most descriptive or best Location Name that represents the multiple locations. Currently I am thinking I need to generate a list of Location Names that are similar, and then go through that list by hand. Any suggestions on how I can approach this are appreciated. @Neall, Thank you for your statement, but unfortunately, each location name is distinct, there are no duplicate location names, only similar. So in the results from your statement "repcount" is 1 in each row. @yukondude, Your step 4 is the heart of my question. A: I've had to do this before. The only real way to do it is to manually match up the various locations. Use your database's console interface and grouping select statements. First, add your "Company Name" field. Then: SELECT count(*) AS repcount, "Location Name" FROM mytable WHERE "Company Name" IS NULL GROUP BY "Location Name" ORDER BY repcount DESC LIMIT 5; Figure out what company the location at the top of the list belongs to and then update your company name field with an UPDATE ... WHERE "Location Name" = "The Location" statement. P.S. - You should really break your company names and location names out into separate tables and refer to them by their primary keys. Update: - Wow - no duplicates? How many records do you have? A: Please update the question, do you have a list of CompanyNames available to you? I ask because you may be able to use Levenshtein algo to find a relationship between your list of CompanyNames and LocationNames. Update There is not a list of Company Names, I will have to generate the company name from the most descriptive or best Location Name that represents the multiple locations. Okay... try this: Build a list of candidate CompanyNames by finding LocationNames made up of mostly or all alphabetic characters. You can use regular expressions for this. Store this list in a separate table. Sort that list alphabetically and (manually) determine which entries should be CompanyNames. Compare each CompanyName to each LocationName and come up with a match score (use Levenshtein or some other string matching algo). Store the result in a separate table. Set a threshold score such that any MatchScore < Threshold will not be considered a match for a given CompanyName. Manually vet through the LocationNames by CompanyName | LocationName | MatchScore, and figure out which ones actually match. Ordering by MatchScore should make the process less painful. The whole purpose of the above actions is to automate parts and limit the scope of your problem. It's far from perfect, but will hopefully save you the trouble of going through 18K records by hand. A: I was going to recommend some complicated token matching algorithm but it's really tricky to get right and if your data does not have a lot of correlation (typos, etc) then it's not going to give very good results. I would recommend you submit a job to the Amazon Mechanical Turk and let a human sort it out. A: Ideally, you'd probably want a separate table named Company and then a company_id column in this "Location" table that is a foreign key to the Company table's primary key, likely called id. That would avoid a fair bit of text duplication in this table (over 18,000 rows, an integer foreign key would save quite a bit of space over a varchar column). But you're still faced with a method for loading that Company table and then properly associating it with the rows in Location. There's no general solution, but you could do something along these lines: Create the Company table, with an id column that auto-increments (depends on your RDBMS). Find all of the unique company names and insert them into Company. Add a column, company_id, to Location that accepts NULLs (for now) and that is a foreign key of the Company.id column. For each row in Location, determine the corresponding company, and UPDATE that row's company_id column with that company's id. This is likely the most challenging step. If your data is like what you show in the example, you'll likely have to take many runs at this with various string matching approaches. Once all rows in Location have a company_id value, then you can ALTER the Location table to add a NOT NULL constraint to the company_id column (assuming that every location must have a company, which seems reasonable). If you can make a copy of your Location table, you can gradually build up a series of SQL statements to populate the company_id foreign key. If you make a mistake, you can just start over and rerun the script up to the point of failure. A: Yes, that step 4 from my previous post is a doozy. No matter what, you're probably going to have to do some of this by hand, but you may be able to automate the bulk of it. For the example locations you gave, a query like the following would set the appropriate company_id value: UPDATE Location SET Company_ID = 1 WHERE (LOWER(Location_Name) LIKE '%to_n shop%' OR LOWER(Location_Name) LIKE '%tts%') AND Company_ID IS NULL; I believe that would match your examples (I added the IS NULL part to not overwrite previously set Company_ID values), but of course in 18,000 rows you're going to have to be pretty inventive to handle the various combinations. Something else that might help would be to use the names in Company to generate queries like the one above. You could do something like the following (in MySQL): SELECT CONCAT('UPDATE Location SET Company_ID = ', Company_ID, ' WHERE LOWER(Location_Name) LIKE ', REPLACE(LOWER(Company_Name), ' ', '%'), ' AND Company_ID IS NULL;') FROM Company; Then just run the statements that it produces. That could do a lot of the grunge work for you.
Normalizing a Table with Low Integrity
I've been handed a table with about 18000 rows. Each record describes the location of one customer. The issue is that when the person created the table, they did not add a field for "Company Name", only "Location Name," and one company can have many locations. For example, here are some records that describe the same customer: Location Table ID Location_Name 1 TownShop#1 2 Town Shop - Loc 2 3 The Town Shop 4 TTS - Someplace 5 Town Shop,the 3 6 Toen Shop4 My goal is to make it look like: Location Table ID Company_ID Location_Name 1 1 Town Shop#1 2 1 Town Shop - Loc 2 3 1 The Town Shop 4 1 TTS - Someplace 5 1 Town Shop,the 3 6 1 Toen Shop4 Company Table Company_ID Company_Name 1 The Town Shop There is no "Company" table, I will have to generate the Company Name list from the most descriptive or best Location Name that represents the multiple locations. Currently I am thinking I need to generate a list of Location Names that are similar, and then go through that list by hand. Any suggestions on how I can approach this are appreciated. @Neall, Thank you for your statement, but unfortunately, each location name is distinct, there are no duplicate location names, only similar. So in the results from your statement "repcount" is 1 in each row. @yukondude, Your step 4 is the heart of my question.
[ "I've had to do this before. The only real way to do it is to manually match up the various locations. Use your database's console interface and grouping select statements. First, add your \"Company Name\" field. Then:\nSELECT count(*) AS repcount, \"Location Name\" FROM mytable\n WHERE \"Company Name\" IS NULL\n GROUP BY \"Location Name\"\n ORDER BY repcount DESC\n LIMIT 5;\n\nFigure out what company the location at the top of the list belongs to and then update your company name field with an UPDATE ... WHERE \"Location Name\" = \"The Location\" statement.\nP.S. - You should really break your company names and location names out into separate tables and refer to them by their primary keys.\nUpdate: - Wow - no duplicates? How many records do you have?\n", "Please update the question, do you have a list of CompanyNames available to you? I ask because you maybe able to use Levenshtein algo to find a relationship between your list of CompanyNames and LocationNames.\n\nUpdate\n\nThere is not a list of Company Names, I will have to generate the company name from the most descriptive or best Location Name that represents the multiple locations.\n\nOkay... try this:\n\nBuild a list of candidate CompanyNames by finding LocationNames made up of mostly or all alphabetic characters. You can use regular expressions for this. Store this list in a separate table.\nSort that list alphabetically and (manually) determine which entries should be CompanyNames.\nCompare each CompanyName to each LocationName and come up with a match score (use Levenshtein or some other string matching algo). Store the result in a separate table.\nSet a threshold score such that any MatchScore < Threshold will not be considered a match for a given CompanyName.\nManually vet through the LocationNames by CompanyName | LocationName | MatchScore, and figure out which ones actually match. Ordering by MatchScore should make the process less painful.\n\nThe whole purpose of the above actions is to automate parts and limit the scope of your problem. It's far from perfect, but will hopefully save you the trouble of going through 18K records by hand.\n", "I was going to recommend some complicated token matching algorithm but it's really tricky to get right and if you're data does not have a lot of correlation (typos, etc) then it's not going to give very good results.\nI would recommend you submit a job to the Amazon Mechanical Turk and let a human sort it out.\n", "Ideally, you'd probably want a separate table named Company and then a company_id column in this \"Location\" table that is a foreign key to the Company table's primary key, likely called id. That would avoid a fair bit of text duplication in this table (over 18,000 rows, an integer foreign key would save quite a bit of space over a varchar column).\nBut you're still faced with a method for loading that Company table and then properly associating it with the rows in Location. There's no general solution, but you could do something along these lines:\n\nCreate the Company table, with an id column that auto-increments (depends on your RDBMS).\nFind all of the unique company names and insert them into Company.\nAdd a column, company_id, to Location that accepts NULLs (for now) and that is a foreign key of the Company.id column.\nFor each row in Location, determine the corresponding company, and UPDATE that row's company_id column with that company's id. This is likely the most challenging step. 
If your data is like what you show in the example, you'll likely have to take many runs at this with various string matching approaches.\nOnce all rows in Location have a company_id value, then you can ALTER the Company table to add a NOT NULL constraint to the company_id column (assuming that every location must have a company, which seems reasonable).\n\nIf you can make a copy of your Location table, you can gradually build up a series of SQL statements to populate the company_id foreign key. If you make a mistake, you can just start over and rerun the script up to the point of failure.\n", "Yes, that step 4 from my previous post is a doozy.\nNo matter what, you're probably going to have to do some of this by hand, but you may be able to automate the bulk of it. For the example locations you gave, a query like the following would set the appropriate company_id value:\nUPDATE Location\nSET Company_ID = 1\nWHERE (LOWER(Location_Name) LIKE '%to_n shop%'\nOR LOWER(Location_Name) LIKE '%tts%')\nAND Company_ID IS NULL;\n\nI believe that would match your examples (I added the IS NULL part to not overwrite previously set Company_ID values), but of course in 18,000 rows you're going to have to be pretty inventive to handle the various combinations.\nSomething else that might help would be to use the names in Company to generate queries like the one above. You could do something like the following (in MySQL):\nSELECT CONCAT('UPDATE Location SET Company_ID = ',\n Company_ID, ' WHERE LOWER(Location_Name) LIKE ',\n LOWER(REPLACE(Company_Name), ' ', '%'), ' AND Company_ID IS NULL;')\nFROM Company;\n\nThen just run the statements that it produces. That could do a lot of the grunge work for you.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "database" ]
stackoverflow_0000006110_database.txt
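Step 3 of the Levenshtein suggestion in the record above (score every candidate company name against every location name) can be prototyped in a few lines of PHP with the built-in levenshtein() function. A hedged sketch; the sample data comes from the question, everything else is illustrative:

```php
<?php
// Candidate company names (from step 2) keyed by a provisional Company_ID.
$companies = array(1 => 'The Town Shop');
$locations = array('TownShop#1', 'TTS - Someplace', 'Toen Shop4');

foreach ($companies as $companyId => $companyName) {
    foreach ($locations as $locationName) {
        // Lower scores mean closer matches; store these for manual vetting.
        $score = levenshtein(strtolower($companyName), strtolower($locationName));
        echo "$companyId | $locationName | $score\n";
    }
}
?>
```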
Q: What's the difference in closure style There are two popular closure styles in javascript. The first I call anonymous constructor: new function() { var code... } and the inline executed function: (function() { var code... })(); are there differences in behaviour between those two? Is one "better" over the other? A: Both cases will execute the function, the only real difference is what the return value of the expression may be, and what the value of "this" will be inside the function. Basically behaviour of new expression Is effectively equivalent to var tempObject = {}; var result = expression.call(tempObject); if (result is not an object) result = tempObject; Although of course tempObject and result are transient values you can never see (they're implementation details in the interpreter), and there is no JS mechanism to do the "is not an object" check. Broadly speaking the "new function() { .. }" method will be slower due to the need to create the this object for the constructor. That said, this should not be a real difference as object allocation is not slow, and you shouldn't be using such code in hot code (due to the cost of creating the function object and associated closure). Edit: one thing I realised that I missed from this is that the tempObject will get expression's prototype, eg. (before the expression.call) tempObject.__proto__ = expression.prototype A: @Lance: the first one is also executing. Compare it with a named constructor: function Blah() { alert('blah'); } new Blah(); this is actually also executing code. The same goes for the anonymous constructor... But that was not the question ;-) A: They both create a closure by executing the code block. As a matter of style I much prefer the second for a couple of reasons: It's not immediately obvious by glancing at the first that the code will actually be executed; the line looks like it is creating a new function, rather than executing it as a constructor, but that's not what's actually happening. Avoid code that doesn't do what it looks like it's doing! Also the (function(){ ... })(); make nice bookend tokens so that you can immediately see that you're entering and leaving a closure scope. This is good because it alerts the programmer reading it to the scope change, and is especially useful if you're doing some postprocessing of the file, eg for minification. A: Well, I made a page like this: <html> <body> <script type="text/javascript"> var a = new function() { alert("method 1"); return "test"; }; var b = (function() { alert("method 2"); return "test"; })(); alert(a); //a is the constructed object alert(b); //b is a string containing "test" </script> </body> </html> Surprisingly enough (to me anyway) it alerted both "method 1" and "method 2". I didn't expect "method 1" to be alerted. The difference was what the values of a and b were. a was the object constructed by the anonymous function, while b was the string that the function returned.
What's the difference in closure style
There are two popular closure styles in javascript. The first I call anonymous constructor: new function() { var code... } and the inline executed function: (function() { var code... })(); are there differences in behaviour between those two? Is one "better" over the other?
[ "Both cases will execute the function, the only real difference is what the return value of the expression may be, and what the value of \"this\" will be inside the function.\nBasically behaviour of\nnew expression\n\nIs effectively equivalent to\nvar tempObject = {};\nvar result = expression.call(tempObject);\nif (result is not an object)\n result = tempObject;\n\nAlthough of course tempObject and result are transient values you can never see (they're implementation details in the interpreter), and there is no JS mechanism to do the \"is not an object\" check.\nBroadly speaking the \"new function() { .. }\" method will be slower due to the need to create the this object for the constructor.\nThat said this should be not be a real difference as object allocation is not slow, and you shouldn't be using such code in hot code (due to the cost of creating the function object and associated closure).\nEdit: one thing i realised that i missed from this is that the tempObject will get expressions prototype, eg. (before the expression.call) tempObject.__proto__ = expression.prototype\n", "@Lance: the first one is also executing. Compare it with a named constructor:\nfunction Blah() {\n alert('blah');\n}\nnew Bla();\n\nthis is actually also executing code. The same goes for the anonymous constructor...\nBut that was not the question ;-)\n", "They both create a closure by executing the code block. As a matter of style I much prefer the second for a couple of reasons:\nIt's not immediately obvious by glancing at the first that the code will actually be executed; the line looks like it is creating a new function, rather than executing it as a constructor, but that's not what's actually happening. Avoid code that doesn't do what it looks like it's doing!\nAlso the (function(){ ... })(); make nice bookend tokens so that you can immediately see that you're entering and leaving a closure scope. This is good because it alerts the programmer reading it to the scope change, and is especially useful if you're doing some postprocessing of the file, eg for minification.\n", "Well, I made a page like this:\n<html>\n<body>\n<script type=\"text/javascript\">\nvar a = new function() { \n alert(\"method 1\");\n\n return \"test\";\n};\n\nvar b = (function() {\n alert(\"method 2\");\n\n return \"test\";\n})();\n\nalert(a); //a is a function\nalert(b); //b is a string containing \"test\"\n\n</script>\n</body>\n</html>\n\nSurprisingly enough (to me anyway) it alerted both \"method 1\" and method 2\". I didn't expect \"method 1\" to be alerted. The difference was what the values of a and b were. a was the function itself, while b was the string that the function returned.\n" ]
[ 12, 5, 3, 0 ]
[ "Yes, there are differences between the two.\nBoth are anonymous functions and execute in the exact same way. But, the difference between the two is that in the second case scope of the variables is restricted to the anonymous function itself. There is no chance of accidentally adding variables to the global scope.\nThis implies that by using the second method, you are not cluttering up the global variables scope which is good as these global variable values can interfere with some other global variables that you may use in some other library or are being used in a third party library.\nExample:\n<html>\n<body>\n<script type=\"text/javascript\">\n\nnew function() { \na = \"Hello\";\nalert(a + \" Inside Function\");\n};\n\nalert(a + \" Outside Function\");\n\n(function() { \nvar b = \"World\";\nalert(b + \" Inside Function\");\n})();\n\nalert(b + \" Outside Function\");\n</script>\n</body>\n</html>\n\nIn the above code the output is something like:\n\nHello Inside Function\n Hello Outside Function\n World Inside Function\n\n... then, you get an error as 'b' is not defined outside the function!\nThus, I believe that the second method is better... safer!\n" ]
[ -4 ]
[ "javascript" ]
stackoverflow_0000006373_javascript.txt
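A small, self-contained demonstration of the difference the top answer in the record above describes: the `new` form yields the constructed object (its `this`), while the immediately invoked form yields whatever the function returns. This sketch is illustrative, not from the record; alert is used to match the record's style.

```javascript
// `new` form: no explicit object is returned, so the result is `this`.
var viaNew = new function () {
    this.kind = 'constructed object';
};

// Inline-executed form: the result is simply the return value.
var viaCall = (function () {
    return { kind: 'plain return value' };
})();

alert(viaNew.kind);  // "constructed object"
alert(viaCall.kind); // "plain return value"
```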
Q: PHP array indexing: $array[$index] vs $array["$index"] vs $array["{$index}"] What is the difference, if any, between these methods of indexing into a PHP array: $array[$index] $array["$index"] $array["{$index}"] I'm interested in both the performance and functional differences. Update: (In response to @Jeremy) I'm not sure that's right. I ran this code: $array = array(100, 200, 300); print_r($array); $idx = 0; $array[$idx] = 123; print_r($array); $array["$idx"] = 456; print_r($array); $array["{$idx}"] = 789; print_r($array); And got this output: Array ( [0] => 100 [1] => 200 [2] => 300 ) Array ( [0] => 123 [1] => 200 [2] => 300 ) Array ( [0] => 456 [1] => 200 [2] => 300 ) Array ( [0] => 789 [1] => 200 [2] => 300 ) A: see @svec and @jeremy above. All array indices are of type 'int' first, then type 'string', and will be cast to that as PHP sees fit. Performance wise, $index should be faster than "$index" and "{$index}" (which are the same). Once you start a double-quote string, PHP will go into interpolation mode and treat it as a string first, but looking for variable markers ($, {}, etc) to replace from the local scope. This is why in most discussions, true 'static' strings should always be single quotes unless you need the escape-shortcuts like "\n" or "\t", because PHP will not need to try to interpolate the string at runtime and the full string can be compiled statically. In this case, doublequoting will first copy the $index into that string, then return the string, where directly using $index will just return the string. A: I timed the 3 ways of using an index like this: for ($ii = 0; $ii < 1000000; $ii++) { // TEST 1 $array[$idx] = $ii; // TEST 2 $array["$idx"] = $ii; // TEST 3 $array["{$idx}"] = $ii; } The first set of tests used $idx=0, the second set used $idx="0", and the third set used $idx="blah". Timing was done using microtime() diffs. I'm using WinXP, PHP 5.2, Apache 2.2, and Vim. :-) And here are the results: Using $idx = 0 $array[$idx] // time: 0.45435905456543 seconds $array["$idx"] // time: 1.0537171363831 seconds $array["{$idx}"] // time: 1.0621709823608 seconds ratio "$idx" / $idx // 2.3191287282497 ratio "{$idx}" / $idx // 2.3377348193858 Using $idx = "0" $array[$idx] // time: 0.5107250213623 seconds $array["$idx"] // time: 0.77445602416992 seconds $array["{$idx}"] // time: 0.77329802513123 seconds ratio "$idx" / $idx // = 1.5163855142717 ratio "{$idx}" / $idx // = 1.5141181512285 Using $idx = "blah" $array[$idx] // time: 0.48077392578125 seconds $array["$idx"] // time: 0.73676419258118 seconds $array["{$idx}"] // time: 0.71499705314636 seconds ratio "$idx" / $idx // = 1.5324545551923 ratio "{$idx}" / $idx // = 1.4871793473086 So $array[$idx] is the hands-down winner of the performance competition, at least on my machine. (The results were very repeatable, BTW, I ran it 3 or 4 times and got the same results.) A: I believe from a performance perspective that $array["$index"] is faster than $array[$index] See Best practices to optimize PHP code performance Don't believe everything you read so blindly... I think you misinterpreted that. The article says $array['index'] is faster than $array[index] where index is a string, not a variable. That's because if you don't wrap it in quotes PHP looks for a constant var and can't find one so assumes you meant to make it a string. A: When will the different indexing methods resolve to different indices? According to http://php.net/types.array, an array index can only be an integer or a string. If you try to use a float as an index, it will truncate it to integer. So if $index is a float with the value 3.14, then $array[$index] will evaluate to $array[3] and $array["$index"] will evaluate to $array['3.14']. Here is some code that confirms this: $array = array(3.14 => 'float', '3.14' => 'string'); print_r($array); $index = 3.14; echo $array[$index]."\n"; echo $array["$index"]."\n"; The output: Array([3] => float [3.14] => string) float string A: Response to the Update: Oh, you're right, I guess PHP must convert array index strings to numbers if they contain only digits. I tried this code: $array = array('1' => 100, '2' => 200, 1 => 300, 2 => 400); print_r($array); And the output was: Array([1] => 300 [2] => 400) I've done some more tests and found that if an array index (or key) is made up of only digits, it's always converted to an integer, otherwise it's a string. ejunker: Can you explain why that's faster? Doesn't it take the interpreter an extra step to parse "$index" into the string to use as an index instead of just using $index as the index? A: If $index is a string there is no difference because $index, "$index", and "{$index}" all evaluate to the same string. If $index is a number, for example 10, the first line will evaluate to $array[10] and the other two lines will evaluate to $array["10"], but PHP casts digit-only string keys to integers, so all three refer to the same element (as the update above demonstrates).
PHP array indexing: $array[$index] vs $array["$index"] vs $array["{$index}"]
What is the difference, if any, between these methods of indexing into a PHP array: $array[$index] $array["$index"] $array["{$index}"] I'm interested in both the performance and functional differences. Update: (In response to @Jeremy) I'm not sure that's right. I ran this code: $array = array(100, 200, 300); print_r($array); $idx = 0; $array[$idx] = 123; print_r($array); $array["$idx"] = 456; print_r($array); $array["{$idx}"] = 789; print_r($array); And got this output: Array ( [0] => 100 [1] => 200 [2] => 300 ) Array ( [0] => 123 [1] => 200 [2] => 300 ) Array ( [0] => 456 [1] => 200 [2] => 300 ) Array ( [0] => 789 [1] => 200 [2] => 300 )
[ "see @svec and @jeremy above. All array indices are of type 'int' first, then type 'string', and will be cast to that as PHP sees fit.\nPerformance wise, $index should be faster than \"$index\" and \"{$index}\" (which are the same). \nOnce you start a double-quote string, PHP will go into interpolation mode and treat it as a string first, but looking for variable markers ($, {}, etc) to replace from the local scope. This is why in most discussions, true 'static' strings should always be single quotes unless you need the escape-shortcuts like \"\\n\" or \"\\t\", because PHP will not need to try to interpolate the string at runtime and the full string can be compiled statically.\nIn this case, doublequoting will first copy the $index into that string, then return the string, where directly using $index will just return the string.\n", "I timed the 3 ways of using an index like this:\nfor ($ii = 0; $ii < 1000000; $ii++) {\n // TEST 1\n $array[$idx] = $ii;\n // TEST 2\n $array[\"$idx\"] = $ii;\n // TEST 3\n $array[\"{$idx}\"] = $ii;\n}\n\nThe first set of tests used $idx=0, the second set used $idx=\"0\", and the third set used $idx=\"blah\". Timing was done using microtime() diffs. I'm using WinXP, PHP 5.2, Apache 2.2, and Vim. :-)\nAnd here are the results:\nUsing $idx = 0\n$array[$idx] // time: 0.45435905456543 seconds\n$array[\"$idx\"] // time: 1.0537171363831 seconds\n$array[\"{$idx}\"] // time: 1.0621709823608 seconds\nratio \"$idx\" / $idx // 2.3191287282497\nratio \"{$idx}\" / $idx // 2.3377348193858\n\nUsing $idx = \"0\"\n$array[$idx] // time: 0.5107250213623 seconds\n$array[\"$idx\"] // time: 0.77445602416992 seconds\n$array[\"{$idx}\"] // time: 0.77329802513123 seconds\nratio \"$idx\" / $idx // = 1.5163855142717\nratio \"{$idx}\" / $idx // = 1.5141181512285\n\nUsing $idx = \"blah\"\n$array[$idx] // time: 0.48077392578125 seconds\n$array[\"$idx\"] // time: 0.73676419258118 seconds\n$array[\"{$idx}\"] // time: 0.71499705314636 seconds\nratio \"$idx\" / $idx // = 1.5324545551923\nratio \"{$idx}\" / $idx // = 1.4871793473086\n\nSo $array[$idx] is the hands-down winner of the performance competition, at least on my machine. (The results were very repeatable, BTW, I ran it 3 or 4 times and got the same results.)\n", "\nI believe from a performance\n perspective that $array[\"$index\"] is\n faster than $array[$index] See Best\n practices to optimize PHP code\n performance\n\nDon't believe everything you read so blindly... I think you misinterpreted that. The article says $array['index'] is faster than $array[index] where index is a string, not a variable. That's because if you don't wrap it in quotes PHP looks for a constant var and can't find one so assumes you meant to make it a string.\n", "\nWhen will the different indexing methods resolve to different indices?\n\nAccording to http://php.net/types.array, an array index can only be an integer or a string. If you try to use a float as an index, it will truncate it to integer. So if $index is a float with the value 3.14, then $array[$index] will evaluate to $array[3] and $array[\"$index\"] will evaluate to $array['3.14']. Here is some code that confirms this:\n$array = array(3.14 => 'float', '3.14' => 'string');\nprint_r($array);\n\n$index = 3.14;\necho $array[$index].\"\\n\";\necho $array[\"$index\"].\"\\n\";\n\nThe output:\nArray([3] => float [3.14] => string)\nfloat\nstring\n\n", "Response to the Update:\nOh, you're right, I guess PHP must convert array index strings to numbers if they contain only digits. 
I tried this code:\n$array = array('1' => 100, '2' => 200, 1 => 300, 2 => 400);\nprint_r($array);\n\nAnd the output was:\nArray([1] => 300 [2] => 400)\n\nI've done some more tests and found that if an array index (or key) is made up of only digits, it's always converted to an integer, otherwise it's a string.\nejunker:\nCan you explain why that's faster? Doesn't it take the interpreter an extra step to parse \"$index\" into the string to use as an index instead of just using $index as the index?\n", "If $index is a string there is no difference because $index, \"$index\", and \"{$index}\" all evaluate to the same string. If $index is a number, for example 10, the first line will evaluate to $array[10] and the other two lines will evaluate to $array[\"10\"] which is a different element than $array[10].\n" ]
[ 34, 29, 7, 6, 1, 0 ]
[ "I believe from a performance perspective that $array[\"$index\"] is faster than $array[$index] See Best practices to optimize PHP code performance\nAnother variation that I use sometimes when I have an array inside a string is:\n$str = \"this is my string {$array[\"$index\"]}\";\n\nEdit: What I meant to say is $row[’id’] is faster than $row[id]\n" ]
[ -6 ]
[ "php", "syntax" ]
stackoverflow_0000006628_php_syntax.txt
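The key-coercion rule that the answers in the record above converge on ("digit-only strings become integer keys") has one wrinkle worth showing: strings that would not round-trip, such as ones with leading zeros, stay strings. A short illustrative snippet:

```php
<?php
$a = array();
$a[8]      = 'int key';
$a['8']    = 'same slot - overwrites the line above';
$a['08']   = 'leading zero, so it stays a string key';
$a['3.14'] = 'a float-like string also stays a string key';

var_dump(array_keys($a)); // int(8), string("08"), string("3.14")
?>
```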
Q: Automate builds for Java RCP for deployment with JNLP I've found many sources that talk about the automated Eclipse PDE process. I feel these sources don't do a good job explaining what's going on. I can create the deployable package, in a semi-manual process via the Feature Export. The automated process requires knowledge of how the org.eclipse.pde.build scripts work. I have gotten a build created, but not for JNLP. Questions: Has anyone ever deployed RCP through JNLP? Were you able to automate the builds? A: I haven't done this before, but I found this site on the web giving an explanation.
Automate builds for Java RCP for deployment with JNLP
I've found many sources that talk about the automated Eclipse PDE process. I feel these sources don't do a good job explaining what's going on. I can create the deployable package, in a semi-manual process via the Feature Export. The automated process requires knowledge of how the org.eclipse.pde.build scripts work. I have gotten a build created, but not for JNLP. Questions: Has anyone ever deployed RCP through JNLP? Were you able to automate the builds?
[ "I haven't done this before, but I found this site on the web giving an explanation.\n" ]
[ 5 ]
[]
[]
[ "build_automation", "java", "jnlp", "rcp" ]
stackoverflow_0000005855_build_automation_java_jnlp_rcp.txt
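The semi-manual Feature Export described in the question can be scripted as a headless PDE build. The fragment below is only an outline: the antRunner invocation is the standard way to drive org.eclipse.pde.build, while the signJars and jnlp.* keys follow the PDE build documentation for Java Web Start deployment and should be verified against your Eclipse version; the feature id, paths, and codebase URL are hypothetical.

# build.properties fragment for the headless build (sketch)
topLevelElementType = feature
topLevelElementId = com.example.rcpapp.feature   # hypothetical feature id
signJars = true                                  # Web Start requires signed jars
jnlp.codebase = http://example.com/rcpapp        # where the generated JNLP files will be hosted
jnlp.j2se = 1.5+                                 # JVM requirement written into the JNLP manifests

# drive org.eclipse.pde.build headlessly via antRunner
eclipse -nosplash -application org.eclipse.ant.core.antRunner \
  -buildfile <eclipseDir>/plugins/org.eclipse.pde.build_<version>/scripts/build.xml \
  -Dbuilder=/path/to/builder/dir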
Q: Why doesn't my favicon display for my web site? I have a website that I've just uploaded onto the Internet. When I browse to the site using Firefox 3.0.1 on Ubuntu I don't see the favicon; Firefox 3.0.1 on WinXP does display it. Why isn't the favicon displaying under Ubuntu? It's a favicon.ico file in the root directory, not referenced in the meta tags; would it work better as a GIF? A: Previously, there was no favicon. The browser cached the lack of favicon. Clear the Firefox cache, and all is well. A: Try taking a look at this previous Question, which shows you must put two meta tags in your html in order for it to work across all browsers and operating systems.
Why doesn't my favicon display for my web site?
I have a website that I've just uploaded onto the Internet. When I browse to the site using Firefox 3.0.1 on Ubuntu I don't see the favicon; Firefox 3.0.1 on WinXP does display it. Why isn't the favicon displaying under Ubuntu? It's a favicon.ico file in the root directory, not referenced in the meta tags; would it work better as a GIF?
[ "Previously, there was no favicon. The browser cached the lack of favicon. Clear the Firefox cache, and all is well.\n", "Try taking a look at this previous Question, which shows you must put two meta tags in your html in order for it to work across all browsers and operating systems.\n" ]
[ 8, 5 ]
[]
[]
[ "favicon", "firefox", "ubuntu" ]
stackoverflow_0000006732_favicon_firefox_ubuntu.txt
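The "two meta tags" mentioned in the second answer are normally written as <link> elements in the page head rather than true meta tags. A minimal sketch, assuming the icon stays at the site root:

<head>
  <!-- legacy declaration that older IE versions look for -->
  <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon" />
  <!-- standards-based declaration used by Firefox and Safari -->
  <link rel="icon" href="/favicon.ico" type="image/x-icon" />
</head>

With these in place, clearing the cache as the first answer suggests should make the icon appear in Firefox on Ubuntu as well.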
Q: Master Pages for large web sites I've just been learning about master pages in ASP.NET 2.0. They sound great, but how well do they work in practice? Does anybody have experience of using them for a large web site? A: I'm pretty sure I've only used master pages in the context of ASP.NET MVC, so I'm not sure if it differs from web forms, but in my experience they are not only excellent but I couldn't imagine not using them. Master pages are code inheritance for web pages. A: They are a must if you want to maintain the look of your application throughout all the pages in the application. They are fairly easy to use: First of all, design your master page and define where you want the content to be placed: <%@ Master ... %> <%-- HTML code --%> <asp:ContentPlaceHolder id="plhMainContent" runat="server" /> <%-- HTML code --%> You can have any number of placeholders, just give them proper identifiers because you'll need them later. Then when creating an aspx page, you will need to mention which master page to use and in which placeholder to put what content. <%@ Page ... MasterPageFile="~/MasterPage.master" ... %> <asp:Content ID="ContentIdentifier" ContentPlaceHolderID="plhMainContent" runat="server"> <%-- More HTML here --%> <%-- Insert web controls here --%> </asp:Content> Just make sure you link to the correct master page and that your content refers to the correct placeholder. Master pages save a lot of time and are very powerful. There are tutorials out there, learn the power of placeholders and web controls. Where I work we use master pages and web controls extensively for some major corporations, which gives us an edge over what other companies can offer. A: They are extremely useful, especially in a CMS environment and for large sites, and as MattMitchell says it's inconceivable that you would build a large site without them. Select template, each template has different editable areas, job done. Master pages can also be inherited, so you can have a Style.Master, derive a Header.Master, then derive all of your layout-based templates from that. A: Master Pages have made building template-able websites easy. I think the trickiest part in building a website using master pages is knowing when to put things into the master page and when to put things into the ContentPlaceHolder on the child page. Generally, dynamic stuff goes into the placeholder while static items go into the master page, but there is sometimes a gray area. It's mostly a design/architecture question. A: In practice I rarely find sites developed without MasterPages. They allow simple and easy manipulation of a site's look and feel and also make navigation elements and shared content pieces a breeze. ASP.NET 3.5 even allows multiple content pages and manipulation of header sections across a single master page. I rate it as being in the Top 10 tools for web developers using ASP.NET. Even ASP.NET MVC uses MasterPages, and all the samples Phil Haack and his crowd put together make use of them. A: I echo other voices in here. I have used Master Pages in 2.0 and the feature has been great to me. I have been embedding banners, standardized backgrounds, captures from Active Directory, and other JavaScript features on it for use throughout the app, maintaining look-and-feel consistency without the need to duplicate the effort on multiple pages. Great feature.
Master Pages for large web sites
I've just been learning about master pages in ASP.NET 2.0. They sound great, but how well do they work in practice? Does anybody have experience of using them for a large web site?
[ "I'm pretty sure I've only used master pages in the context of ASP.NET MVC so I'm not sure if it differs from web forms but in my experience they are not only excellent but I couldn't imagine not using them. Master pages are code inheritance to web pages.\n", "They are a must if you want to maintain the look of your application throughout all the pages in the application.\nThey are fairly easy to use:\nFirst of all, design your master page and define where you want the content to be placed:\n<%@ Master ... %>\n\n<%-- HTML code --%>\n<asp:ContentPlaceHolder id=\"plhMainContent\" runat=\"server\" />\n<%-- HTML code --%>\n\nYou can have any number of place holders, just give them proper identifiers because you'll need them later.\nThen when creating an aspx page, you will need to mention which master page to use and in which place holder to put what content.\n<%@ Page ... master=\"~/MasterPage.master\" ... %>\n\n<asp:Content ID=\"ContentIdentifier\" ContentPlaceholderid=\"plhMainContent\" runat=\"server\">\n <%-- More HTML here --%>\n <%-- Insert web controls here --%>\n</asp:content>\n\nJust make sure you link to the correct master page and that your content refers to the correct place holder.\nMaster pages save a lot of time and are very powerful. There are tutorials out there, learn the power of place holders and web controls.\nWhere I work we use master pages and web controls extensively for some major corporations, it gives us an edge when comparing with what other companies can offer.\n", "They are extremely useful, especially in a CMS environment and for large sites, and as MattMitchell says it's inconceivable that you would build a large site without them.\nSelect template, each template has different editable areas, job done. Master pages can also be inherited, so you can have a Style.Master, derive a Header.Master, then derive all of your layout-based templates from that.\n", "Master Pages have made building template-able websites easy.\nI think the trickiest part in building a website using master pages is knowing when to put things into the master page and when to put things into the ContentPlaceHolder on the child page. Generally, dynamic stuff goes into the placeholder while static items go into the master page, but there is sometimes a gray area. It's mostly a design/architecture question.\n", "In practise I don't often find sites developed not using MasterPages. They allow simple and easy manipulation of site look and feel and also makes navigation elements and shared content pieces a breeze.\nASP.Net 3.5 even allows multiple contentpages and manipulation of header sections across a single master pages.\nI rate it as being in the Top 10 tools for Web Developers using ASP.Net.\nEven ASP.Net MVC uses MasterPages and all the samples Paul Haack and his crowd put's together makes use of them.\n", "I echo other voices in here. I have used Master Pages in 2.0 and the feature have been great to me. I have been embedding banners, standardized background, captures from Active Dir and other JavaScript features on it for use throughout the app, maintaining the look and feel consistency and without the need to duplicate the effort on multiple pages. Great feature.\n" ]
[ 6, 5, 1, 0, 0, 0 ]
[]
[]
[ ".net", "asp.net", "asp.net_2.0", "master_pages" ]
stackoverflow_0000006719_.net_asp.net_asp.net_2.0_master_pages.txt
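The inheritance mentioned in the third answer (a Style.Master from which a Header.Master derives) is expressed with the MasterPageFile attribute on the child master. A minimal sketch with hypothetical names:

<%-- Style.Master: top-level look and feel --%>
<%@ Master Language="C#" %>
<html>
<body>
    <asp:ContentPlaceHolder ID="plhBody" runat="server" />
</body>
</html>

<%-- Header.Master: nests inside Style.Master and adds a shared header --%>
<%@ Master Language="C#" MasterPageFile="~/Style.Master" %>
<asp:Content ContentPlaceHolderID="plhBody" runat="server">
    <div class="header">Shared banner markup goes here</div>
    <asp:ContentPlaceHolder ID="plhPage" runat="server" />
</asp:Content>

Content pages then set their own MasterPageFile to ~/Header.Master and fill plhPage.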
Q: How can I enable disabled radio buttons? The following code works great in IE, but not in FF or Safari. I can't for the life of me work out why. The code is supposed to disable radio buttons if you select the "Disable 2 radio buttons" option. It should enable the radio buttons if you select the "Enable both radio buttons" option. These both work... However, if you don't use your mouse to move between the 2 options ("Enable..." and "Disable...") then the radio buttons do not appear to be disabled or enabled correctly, until you click anywhere else on the page (not on the radio buttons themselves). If anyone has time/is curious/feeling helpful, please paste the code below into an html page and load it up in a browser. It works great in IE, but the problem manifests itself in FF (3 in my case) and Safari, all on Windows XP. function SetLocationOptions() { var frmTemp = document.frm; var selTemp = frmTemp.user; if (selTemp.selectedIndex >= 0) { var myOpt = selTemp.options[selTemp.selectedIndex]; if (myOpt.attributes[0].nodeValue == '1') { frmTemp.transfer_to[0].disabled = true; frmTemp.transfer_to[1].disabled = true; frmTemp.transfer_to[2].checked = true; } else { frmTemp.transfer_to[0].disabled = false; frmTemp.transfer_to[1].disabled = false; } } } <form name="frm" action="coopfunds_transfer_request.asp" method="post"> <select name="user" onchange="javascript: SetLocationOptions()"> <option value="" />Choose One <option value="58" user_is_tsm="0" />Enable both radio buttons <option value="157" user_is_tsm="1" />Disable 2 radio buttons </select> <br /><br /> <input type="radio" name="transfer_to" value="fund_amount1" />Premium&nbsp;&nbsp;&nbsp; <input type="radio" name="transfer_to" value="fund_amount2" />Other&nbsp;&nbsp;&nbsp; <input type="radio" name="transfer_to" value="both" CHECKED />Both <br /><br /> <input type="button" class="buttonStyle" value="Submit Request" /> </form> A: To get FF to mimic IE's behavior when using the keyboard, you can use the keyup event on the select box. In your example (I am not a fan of attaching event handlers this way, but that's another topic), it would be like this: <select name="user" id="selUser" onchange="javascript:SetLocationOptions()" onkeyup="javascript:SetLocationOptions()"> A: Well, IE has a somewhat non-standard object model; what you're doing shouldn't work but you're getting away with it because IE is being nice to you. In Firefox and Safari, document.frm in your code evaluates to undefined. You need to be using id values on your form elements and use document.getElementById('whatever') to return a reference to them instead of referring to non-existent properties of the document object. So this works a bit better and may do what you're after: Line 27: <form name="frm" id="f" ... Line 6: var frmTemp = document.getElementById('f'); But you might want to check out this excellent book if you want to learn more about the right way of going about things: DOM Scripting by Jeremy Keith Also while we're on the subject, Bulletproof Ajax by the same author is also deserving of a place on your bookshelf as is JavaScript: The Good Parts by Doug Crockford A: Why not grab one of the AJAX scripting libraries, they abstract away a lot of the cross browser DOM scripting black magic and make life a hell of a lot easier.
How can I enable disabled radio buttons?
The following code works great in IE, but not in FF or Safari. I can't for the life of me work out why. The code is supposed to disable radio buttons if you select the "Disable 2 radio buttons" option. It should enable the radio buttons if you select the "Enable both radio buttons" option. These both work... However, if you don't use your mouse to move between the 2 options ("Enable..." and "Disable...") then the radio buttons do not appear to be disabled or enabled correctly, until you click anywhere else on the page (not on the radio buttons themselves). If anyone has time/is curious/feeling helpful, please paste the code below into an html page and load it up in a browser. It works great in IE, but the problem manifests itself in FF (3 in my case) and Safari, all on Windows XP. function SetLocationOptions() { var frmTemp = document.frm; var selTemp = frmTemp.user; if (selTemp.selectedIndex >= 0) { var myOpt = selTemp.options[selTemp.selectedIndex]; if (myOpt.attributes[0].nodeValue == '1') { frmTemp.transfer_to[0].disabled = true; frmTemp.transfer_to[1].disabled = true; frmTemp.transfer_to[2].checked = true; } else { frmTemp.transfer_to[0].disabled = false; frmTemp.transfer_to[1].disabled = false; } } } <form name="frm" action="coopfunds_transfer_request.asp" method="post"> <select name="user" onchange="javascript: SetLocationOptions()"> <option value="" />Choose One <option value="58" user_is_tsm="0" />Enable both radio buttons <option value="157" user_is_tsm="1" />Disable 2 radio buttons </select> <br /><br /> <input type="radio" name="transfer_to" value="fund_amount1" />Premium&nbsp;&nbsp;&nbsp; <input type="radio" name="transfer_to" value="fund_amount2" />Other&nbsp;&nbsp;&nbsp; <input type="radio" name="transfer_to" value="both" CHECKED />Both <br /><br /> <input type="button" class="buttonStyle" value="Submit Request" /> </form>
[ "To get FF to mimic IE's behavior when using the keyboard, you can use the keyup event on the select box. In your example (I am not a fan of attaching event handlers this way, but that's another topic), it would be like this:\n<select name=\"user\" id=\"selUser\" onchange=\"javascript:SetLocationOptions()\" onkeyup=\"javascript:SetLocationOptions()\">\n\n", "Well, IE has a somewhat non-standard object model; what you're doing shouldn't work but you're getting away with it because IE is being nice to you. In Firefox and Safari, document.frm in your code evaluates to undefined.\nYou need to be using id values on your form elements and use document.getElementById('whatever') to return a reference to them instead of referring to non-existent properties of the document object.\nSo this works a bit better and may do what you're after:\nLine 27: <form name=\"frm\" id=\"f\" ...\n\nLine 6: var frmTemp = document.getElementById('f');\n\nBut you might want to check out this excellent book if you want to learn more about the right way of going about things: DOM Scripting by Jeremy Keith\nAlso while we're on the subject, Bulletproof Ajax by the same author is also deserving of a place on your bookshelf as is JavaScript: The Good Parts by Doug Crockford\n", "Why not grab one of the AJAX scripting libraries, they abstract away a lot of the cross browser DOM scripting black magic and make life a hell of a lot easier.\n" ]
[ 6, 3, 1 ]
[]
[]
[ "html", "javascript", "radio_button" ]
stackoverflow_0000006441_html_javascript_radio_button.txt
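Combining the two answers above (an id on the form, document.getElementById in place of document.frm, and a keyup handler so keyboard-driven selection changes fire in Firefox) gives roughly the following sketch; the option-attribute logic is unchanged and elided:

<form name="frm" id="frm" action="coopfunds_transfer_request.asp" method="post">
  <select name="user" onchange="SetLocationOptions()" onkeyup="SetLocationOptions()">
    ...
  </select>
  ...
</form>

function SetLocationOptions() {
  var frmTemp = document.getElementById('frm'); // resolves in FF and Safari too
  var selTemp = frmTemp.user;
  // remainder as in the question
}

Note that the "javascript:" prefix inside the event attributes has been dropped; it is unnecessary in inline handlers, where it is merely parsed as a label.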
Q: How do I best share an embeddable form in VB6? Is there a good way to create a form in VB6 that can easily be embedded inside other forms? On a few occasions recently, I've wanted to design and code a Form object that I could plug into several other "parent" forms. My goal is to create a centralized piece of code for managing several UI components in a particular way, and then be able to use that (both the UI layout and the logic) in more than one place. I'm certainly willing to use code (rather than the Design View) to load the child form. The best I've come up with so far is to pull all of the interesting logic for the child form into a Class Module, and have each parent form lay out the UI (in a Picture control, perhaps) and pass that Picture object into the class module. The class then knows how to operate on the picture, and it assumes that all its expected pieces have been laid out appropriately. This approach has several downsides, and I'd like something a bit more elegant. A: Take a look at VB6 UserControls; I think they are exactly what you need. You can create a UserControl within your project, add controls and code to that control, and then insert it onto a form just like standard VB6 controls. I've used UserControls to share UI layouts on many occasions and it works great.
How do I best share an embeddable form in VB6?
Is there a good way to create a form in VB6 that can easily be embedded inside other forms? On a few occasions recently, I've wanted to design and code a Form object that I could plug into several other "parent" forms. My goal is to create a centralized piece of code for managing several UI components in a particular way, and then be able to use that (both the UI layout and the logic) in more than one place. I'm certainly willing to use code (rather than the Design View) to load the child form. The best I've come up with so far is to pull all of the interesting logic for the child form into a Class Module, and have each parent form lay out the UI (in a Picture control, perhaps) and pass that Picture object into the class module. The class then knows how to operate on the picture, and it assumes that all its expected pieces have been laid out appropriately. This approach has several downsides, and I'd like something a bit more elegant.
[ "Take a look at VB6 UserControls; I think they are exactly what you need. You can create a UserControl within your project, add controls and code to that control, and then insert it onto a form just like standard VB6 controls. I've used UserControls to share UI layouts on many occasions and it works great.\n" ]
[ 9 ]
[]
[]
[ "code_reuse", "forms", "vb6" ]
stackoverflow_0000006913_code_reuse_forms_vb6.txt
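A minimal sketch of the UserControl approach, with hypothetical names (SharedPanel, cboItems): the shared widgets and their logic live together in one .ctl file, and each parent form then sites the control like any built-in one instead of passing a Picture control into a class module.

' SharedPanel.ctl: reusable UI plus its logic in a single UserControl.
' Assumes a ComboBox named cboItems was drawn on the control at design time.
Option Explicit

' Raised so parent forms can react without knowing the control's internals
Public Event SelectionChanged(ByVal ItemId As Long)

Private Sub cboItems_Click()
    ' Centralised logic that previously had to be duplicated in every parent form
    If cboItems.ListIndex >= 0 Then
        RaiseEvent SelectionChanged(cboItems.ItemData(cboItems.ListIndex))
    End If
End Sub

Public Property Get SelectedItemId() As Long
    If cboItems.ListIndex >= 0 Then SelectedItemId = cboItems.ItemData(cboItems.ListIndex)
End Property

Once the UserControl is added to the project it shows up in the toolbox, and dropping it on a form replaces the Picture-control-plus-class-module workaround described in the question.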