The DiscountASP.NET ASP.NET hosting platform supports AJAX hosting, so you can deploy your AJAX-powered web applications!
ASP.NET AJAX (formerly called ATLAS) is a free framework for building rich interactive, cross-browser web applications. This Microsoft technology integrates cross-browser client script libraries with the ASP.NET 2.0 framework.
ASP.NET AJAX makes it possible to take advantage of AJAX (Asynchronous JavaScript and XML) techniques to create ASP.NET pages with rich user interfaces. Including both client-side and server-side components, ASP.NET AJAX lets you create web applications in ASP.NET 2.0 that can update data on the page without a complete reload.
Saturday, March 10, 2007
Friday, March 9, 2007
GPS topo maps of Bulgaria
NEW!!! Online map
The first version of BG Topo Maps for Magellan GPS receivers has been released.
The map is compatible with the eXplorist, Meridian and SporTrak series. The data is based on BG Topo Maps 2.11: roads, railroads, streets, trails, hydrography, shoreline, country border, fairways, and more than 12,000 searchable POIs. Relief and land cover data will be added in future versions.
Download BG Topo Maps for Magellan
Here you can find free GPS topo maps of Bulgaria for Garmin and Magellan GPS receivers and online topographic map of Bulgaria.
Why use ASP.NET AJAX
By Omar Al Zabir
When others see Pageflakes, the first question they ask me is: "Why did you not use the Protopage or Dojo library? Why Atlas?" Microsoft Atlas (renamed to ASP.NET AJAX) is a very promising AJAX framework. Microsoft is putting a lot of effort into it, building many reusable components that can save you a lot of time and give your web application a complete facelift with relatively little effort or change. It integrates very well with ASP.NET, and it is compatible with the ASP.NET Membership and Profile providers. The AJAX Control Toolkit project contains 28 extenders which you can drag & drop onto your page, tweak some properties, and add pretty cool effects to the page. Check out the examples to see how powerful the ASP.NET AJAX framework has really become.
When we first started developing Pageflakes, Atlas was in its infancy. We were only able to use the page method and Web Service method call features of Atlas. We had to build our own drag & drop, component architecture, popups, collapse/expand features, etc. But now you can get all of these from Atlas and save a lot of development time. The web service proxy feature of Atlas is a marvel. You can point a <script> tag at an .asmx file and get a JavaScript class generated right out of the web service definition. The JavaScript class contains the exact methods that you have on the web service class. This makes it really easy to add/remove web services and to add/remove methods in web services without requiring any changes on the client side. It also offers a lot of control over the AJAX calls and provides rich exception trapping in JavaScript. Server-side exceptions are nicely thrown to the client-side JavaScript code, where you can trap them and show nicely formatted error messages to the user. Atlas works really well with ASP.NET 2.0, eliminating the integration problem completely. You need not worry about authentication and authorization on page methods and web service methods. So you save a lot of code on the client side (of course, the Atlas runtime is huge for this reason), and you can concentrate more on your own code than on building up all this framework-related code.
The recent version of Atlas works nicely with ASP.NET Membership and Profile services, giving you login/logout features from JavaScript without requiring page postbacks, and you can read/write Profile objects directly from JavaScript. This comes very handy when you heavily use ASP.NET membership and profile providers in your web application, which we do at Pageflakes.
In earlier versions of Atlas, there was no way to make HTTP GET calls. All calls were HTTP POST, and thus quite expensive. Now you can specify which calls should be HTTP GET. Once you have HTTP GET, you can utilize HTTP response caching features, which I will show you soon.
Read More
Wednesday, March 7, 2007
What do BTW, FAQ, FYI and other acronyms mean?
These are all abbreviations for specific phrases commonly used in informal written computer correspondence, online forums and boards, and online gaming. Following is a list of some common acronyms and their meanings:
AFAIC As far as I'm concerned
AFAIK As far as I know
AFK Away from keyboard
BRB Be right back
BTDT Been there, done that
BTW By the way
BUAG Butt-ugly ASCII graphic
C/C Comments and criticism
EOM End of message
FAQ Frequently Asked Question. When people say "the FAQ", they are generally referring to a list of answers to Frequently Asked Questions. These are posted monthly on many newsgroups or mailing lists to reduce discussion of topics that have already been thoroughly covered. It's a good idea to look at a FAQ file for a newsgroup or mailing list before participating in it. For help in finding FAQ files, see the Knowledge Base document Where can I find a repository of Usenet FAQ files? A large list of all known FAQ postings in newsgroups is also posted periodically in the Usenet newsgroup news.admin.
FTW For the win
FWIW For what it's worth
FYI For your information
HTH Hope this helps
IANAL I am not a lawyer
IIRC If I recall correctly
IMHO In my humble opinion
IMNSHO In my not so humble opinion
IMO In my opinion
IOW In other words
l33t or 1337 From "elite". This has become a term used to describe the informal communication of Internet gaming. L33t speak is easily identified by the substitution of number and other characters for regular letters; e.g., hackers becomes h4XX0rz.
LFG Looking for group, usually used in MMORPGs
LMAO Laughing my butt off
LOL Laughing out loud
MMORPG Massively multiplayer online role-playing game, such as World of Warcraft or Star Wars Galaxies
MOTAS Member of the appropriate sex
MOTOS Member of the opposite sex
MOTSS Member of the same sex
NG Newsgroup
n00b From "newbie", meaning a newcomer not yet familiar with the rules
OTOH On the other hand
PWN Usage of the term "own", as in "I PWNed you!"
RL Real Life, as opposed to the Internet
ROFL Rolling on the floor laughing
ROFLMAO Rolling on the floor laughing my butt off
RTFM Read The Fine Manual. This may be interpreted as: "You have asked a question which would best be answered by consulting the manual (or FAQ, or other help files), a copy of which should be in your possession. The question you have asked is clearly answered in the manual and you are wasting time asking people to read it to you." It's good netiquette to mail this type of answer to another user rather than post it in public messages.
SO Significant other, used to refer to someone's romantic partner without making any assumptions about gender or legal status
TLA Three letter acronym
TTFN Ta ta for now
TTYL Talk to you later
w00t An expression of joy
WFN Wrong forum, noob
WTF What the heck
YMMH You might mean here
YMMV Your mileage may vary
{g} Grin
{BG} Big grin
Tuesday, March 6, 2007
Authentication - Security Best Practices
Authentication modules are among the most frequently attacked pieces of a web application. We look at ten good practices that help ensure your authentication system is safe against attack.
1. Use an external authentication service provider like LDAP if it fits into your architecture. It’s simpler to reuse a tried and tested authentication system than writing your own from scratch.
2. Use SSL to transmit all sensitive authentication information from the browser to the web server. The 'Login' page, the 'Change Password' page and the 'Forgot Password' page all send authentication details and must be SSL-enabled. The session ID token is a short-lived password with the lifetime of a session, and should also be sent over SSL. Marking the session ID cookie with the "secure" attribute ensures that it is sent only on SSL-enabled requests.
3. Validate all user inputs, and be doubly careful with login inputs. Define acceptable inputs including their lengths (e.g., usernames are alphanumeric and fewer than 16 characters). Reject everything that does not fit that rule. Attacks like SQL injection have devastating effects and are difficult to prevent unless you follow a strict policy of accepting only what is allowed.
4. Use a salted hash technique for transmitting passwords, even when you’re using SSL. This ensures two things: one, you’re not vulnerable to a replay attack and two, passwords cannot be stolen from the submit cache of the browser.
5. If your application manages credential storage, ensure that only one-way hashes of passwords are stored in the database, and that the table/file that stores the passwords and keys is writable only by the application.
6. Separate the authentication page from the next-page-after-login, and use redirection during login to reach the next page. It is possible to steal passwords if such redirection is not used.
7. Use the session management framework provided by the server to manage your sessions, instead of writing your own. Session tokens should be strongly random and should expire after a defined period of inactivity. It's better to re-authenticate the user after a period of inactivity than to set long expiry windows.
8. Design your "Forgot Password" feature to send a mail to the authorized email address with a short-lived link that lets the user reset the password. Choose hint questions whose answers are not limited in range ("favorite color" has just a few choices!). Better still, allow the user to define a custom hint question.
9. The “Remember Password” feature is convenient, but very unsafe. Avoid it as far as possible. But if the business requires that feature, do not store the password itself in the cookie, as cookies are stored unencrypted on the client. And remember to use a token that is set to a new value after every login. If that token has a very long lifetime, it soon becomes equivalent to the password. And warn users not to use it from shared computers.
10. Log all authentication failures in detail, including the IP address and the exact HTTP request that triggered it. This aids forensic analysis in case your application is compromised.
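Points 4 and 5 above, salted one-way password handling, can be sketched in Python. This is an illustrative standard-library sketch; the function names and iteration count are my own choices, not from any particular framework:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Return (salt, digest); only these are stored, never the password itself.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Recompute the hash with the stored salt and compare in constant time.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_password("s3cret")
assert verify_password("s3cret", salt, digest)
assert not verify_password("wrong", salt, digest)
```

Because only the salted hash is stored, a stolen table does not directly reveal passwords, and two users with the same password get different stored values.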
Implementing 'Forgot Password' feature
Which is the best method for implementing the Forgot Password feature?
1. Displaying the old password after asking a reminder question
2. Displaying a new password after the reminder question
3. Sending a temporary password by mail
4. Sending a temporary link to a ‘Change Password’ page by mail
Answer
The answer to the quiz is 4) Sending a temporary link to a ‘Change Password’ page by mail.
The challenge of a good Forgot Password feature is to prevent an attacker from stealing the password by impersonation or sniffing. So the first two options are out of the question. Both are similar, since in both cases the password is displayed in clear text, making it possible for attackers to steal passwords either by sniffing the traffic or by shoulder surfing. The password would also have to be stored in clear text in the database, where it can be recovered; passwords should instead be stored as one-way hashes. If the password is stored as a one-way hash in the database, it cannot be recovered and can only be reset to a new value. Now, we could reset the password to a temporary value and send this temporary password to the user by mail. Again, attackers may obtain it by sniffing, or from the mail, which may sit in the user's mailbox for a long time.
So the most secure method of implementing this feature is to send a temporary link to a 'Change Password' page by mail. The application can ask a reminder question and, on getting the right answer, send the user a mail with a link that is active only for a short time. That page allows the user to reset the password. This way, the password can neither be sniffed nor shoulder-surfed, and since the link is active for only a short time, there is little risk even if the mail sits in the mailbox.
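The short-lived link can be built around a signed, expiring token. Here is a minimal Python sketch using only the standard library; SECRET_KEY and the token format are my own illustrative choices, not a prescribed scheme:

```python
import base64
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # hypothetical key; keep it out of the database

def make_reset_token(username, ttl_seconds=900, now=None):
    # Token = base64(username:expiry:HMAC(username:expiry)); valid for ttl_seconds.
    expiry = int((now if now is not None else time.time()) + ttl_seconds)
    payload = "%s:%d" % (username, expiry)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(("%s:%s" % (payload, sig)).encode()).decode()

def check_reset_token(token, now=None):
    # Return the username if the token is authentic and unexpired, else None.
    try:
        decoded = base64.urlsafe_b64decode(token.encode()).decode()
        username, expiry, sig = decoded.rsplit(":", 2)
    except Exception:
        return None  # malformed or tampered token
    payload = "%s:%s" % (username, expiry)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature does not match: token was forged or altered
    if (now if now is not None else time.time()) > int(expiry):
        return None  # link has expired
    return username
```

The server never stores the token; it only needs the key to verify that the link it mailed out is genuine and still fresh.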
SQL Server 2000 Best Practices
Best Practices is a great and wonderful thing … that is if everyone working on the project adheres to them.
In many sessions and user groups I've been asked for the best, or should I say better, practices for coding with SQL Server 2000. The views listed below are mine, and I'm sure no SQL Server guru would argue otherwise. I just thought of adding sections on coding practices, and here I am. You can also consider these as guidelines for development in SQL Server. I hope you get good mileage out of this article.
Catch the first version of the article at the MSDN India site. As more practices get added, I've hosted it on this site as well.
Note: In this article I'll assume that you already know the T-SQL syntax and that we are working with SQL Server 2000.
1. Normalize your tables
There are two common excuses for not normalizing databases: performance and pure laziness. You'll pay for the second one sooner or later; and as for performance, don't optimize what's not slow. More often than not, the denormalized design turns out slower. DBMSs were designed to be used with normalized databases and SQL Server is no exception, so design with normalization in mind.
2. Avoid using cursors
Use cursors wisely. Cursors are fundamentally evil. They force the database engine to repeatedly fetch rows, negotiate blocking, manage locks, and transmit results. They consume network bandwidth as the results are transmitted back to the client, where they consume RAM, disk space, and screen real estate. Consider the resources consumed by each cursor you build and multiply this demand by the number of simultaneous users. Smaller is better. And good DBAs, most of the time, know what they are doing. But, if you are reading this, you are not a DBA, right?
Having said this, the next question is: if I were to use cursors, then what? Well, here are my two cents on cursor usage. Use the appropriate cursor for the job at hand.
· Don't use scrollable cursors unless required
· Use read-only cursors if you do not intend to update; this covers 90% of situations
· Prefer forward-only cursors when you must use cursors
· Don't forget to close and deallocate the cursors you use
· Try to reduce the number of columns and records fetched in a cursor
3. Index Columns
Create Index on columns that are going to be highly selective. Indexes are vital to efficient data access; however, there is a cost associated with creating and maintaining an index structure. For every insert, update and delete, each index must be updated. In a data warehouse, this is acceptable, but in a transactional database, you should weigh the cost of maintaining an index on tables that incur heavy changes. The bottom line is to use effective indexes judiciously. On analytical databases, use as many indexes as necessary to read the data quickly and efficiently.
A classic example: do NOT index a column like "Gender". It has a selectivity of about 50%, so if your table has 10 million records, you can rest assured the index may still force you to traverse half the rows. Maintaining such indexes can slow your performance.
4. Use transactions
Use transactions judiciously. This will save you when things go wrong. Working with data for some time, you'll soon discover unexpected situations that make your stored procedure crash. Make sure the transaction starts as late as possible and ends as early as possible, to reduce the time resources stay locked. In short, keep transactions short.
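The "start late, end early" rule can be seen in miniature with Python's sqlite3 module, used here purely as a stand-in for SQL Server; in T-SQL the same shape is BEGIN TRAN just before the updates and COMMIT right after:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100), (2, 0)")
conn.commit()

# Do all reads and computation *before* the transaction; keep the locked
# section down to just the two UPDATEs that must succeed or fail together.
amount = 40
with conn:  # opens a transaction; commits on success, rolls back on error
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
# balances == {1: 60, 2: 40}
```

If either UPDATE throws, the context manager rolls both back, and because the transaction wraps only those two statements, locks are held for the shortest possible time.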
5. Analyze deadlocks
Always access your tables in the same order. When working with stored procedures and transactions, you may run into this soon; any SQL programmer or database analyst will have come across this problem. If the order changes, there will be a cyclic wait for resources to be released, and users will experience a permanent hang in the application. Deadlocks can be tricky to find if the lock sequence is not carefully designed. To summarize: a deadlock occurs when two users hold locks on separate objects and each tries to lock the other's objects. SQL Server automatically detects and breaks the deadlock; the terminated transaction is automatically rolled back and error code 1205 is issued.
6. GOTO Usage
Avoid using the infamous GOTO. This is a time-proven means of adding disorder to program flow. There are some cases where intelligent use of GOTO is preferable to dogmatically refusing to use it. On the other hand, unintelligent use of GOTO is a quick ticket to unreadable code.
7. Increase timeouts
When querying a database, the default timeout is often low, like 30 seconds. Remember that report queries may run longer than this, especially as your database grows. Hence, increase this value to an acceptable one.
8. Avoid NULLable columns
When possible, normalize your tables and move nullable columns into separate tables. Each NULLable column consumes an extra byte per row and adds overhead when querying data. Splitting them out is more flexible and faster, and reduces the number of NULLable columns. I'm not saying that NULLs are evil incarnate; they can simplify coding when "missing data" is part of your business rules.
9. TEXT datatype
Avoid the TEXT datatype unless you are using it for really large data. It is not flexible to query, is slow, and wastes a lot of space if used incorrectly. Often a VARCHAR will handle your data better. You can also look at the "text in row" table option in SQL Server 2000. But I stand by the first statement: avoid using it in the first place.
10. SELECT * Usage
It's very difficult to get out of this habit, but believe me, it is essential. Do NOT use this syntax; always qualify the full list of columns. Selecting all columns increases network traffic, requires more buffers and processing, and is error-prone should the table or view definition change.
11. Temporary tables usage
Avoid temporary tables unless strictly necessary. More often than not, a subquery can substitute for a temporary table. In SQL Server 2000, there are alternatives like the TABLE variable datatype, which can provide in-memory solutions for small tables inside stored procedures too. To recollect some of the advantages of table variables:
· A table variable behaves like a local variable. It has a well-defined scope, which is the function, stored procedure, or batch in which it is declared. Within its scope, a table variable may be used like a regular table.
· However, a table variable may not be used in the following statements: INSERT INTO table_variable EXEC stored_procedure, and SELECT select_list INTO table_variable.
· Table variables are cleaned up automatically at the end of the function, stored procedure, or batch in which they are defined.
· Table variables used in stored procedures result in fewer recompilations of the stored procedures than their temporary-table counterparts.
· Transactions involving table variables last only for the duration of an update on the table variable. Thus, table variables require less locking and fewer logging resources.
12. Using UDF
UDFs can replace stored procedures, but be careful in their usage; sometimes UDFs can take a toll on your application's performance. UDFs also have to be prefixed with the owner's name. This is not a drawback but a requirement. I support the usage of SPs more than UDFs.
13. Multiple User Scenario
Sometimes two users will edit the same record at the same time. While writing back, the last writer wins and some of the updates will be lost. It's easy to detect this situation: create a timestamp column and check it before you write. Code for these practical situations and test your application for these scenarios.
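This timestamp check is known as optimistic concurrency. A sketch with Python's sqlite3, using an explicit version column (in SQL Server 2000 a TIMESTAMP column plays the same role; the table and function names here are my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO docs VALUES (1, 'draft', 1)")
conn.commit()

def save(conn, doc_id, new_body, version_read):
    # Write only if nobody changed the row since we read it; report success.
    cur = conn.execute(
        "UPDATE docs SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, doc_id, version_read),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows touched means someone else won the race

# Two users read version 1; the first save wins, the second is rejected.
first = save(conn, 1, "user A's edit", 1)
second = save(conn, 1, "user B's edit", 1)
# first is True (version bumped to 2); second is False (stale version)
```

The second writer gets a clean failure instead of silently overwriting the first writer's work, and the application can re-read and retry.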
14. Use SCOPE_IDENTITY
Don't do SELECT MAX(ID) FROM MasterTable when inserting into a detail table. This is a common mistake, and it will fail when concurrent users are inserting data at the same instant. Use SCOPE_IDENTITY or IDENT_CURRENT instead. My choice would be SCOPE_IDENTITY, as it gives you the identity value from the current scope.
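The same pitfall exists in any database API. Illustrated with Python's sqlite3, where cursor.lastrowid plays the role of SCOPE_IDENTITY, scoped to the statement you just ran and unaffected by other sessions (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("CREATE TABLE detail (master_id INTEGER, note TEXT)")

cur = conn.execute("INSERT INTO master (name) VALUES (?)", ("order-1",))
my_id = cur.lastrowid  # the id generated by *this* statement only

# A concurrent user inserts here; SELECT MAX(id) would now return *their* id.
conn.execute("INSERT INTO master (name) VALUES (?)", ("someone else's order",))

conn.execute("INSERT INTO detail VALUES (?, ?)", (my_id, "first line"))
rows = conn.execute("SELECT master_id FROM detail").fetchall()
# rows == [(1,)]: the detail row points at our record, not the later insert
```

Had the code used SELECT MAX(id) after the interleaved insert, the detail row would have been attached to the wrong master record.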
15. Analyze Query Plans
The SQL Server Query Analyzer is a powerful tool, and surely your friend; through it you'll learn a lot about how the engine works and how query and index design affect performance. Study the execution plan that the execution plan window shows for potential bottlenecks.
16. Parameterized queries
Parameterize all your queries using sp_executesql. This helps the optimizer cache the execution plans and reuse them the next time the query is requested, saving the time required to parse, compile and place the execution plan. Avoid ad-hoc, concatenated dynamic SQL as much as possible.
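The difference between concatenated and parameterized queries is easy to demonstrate. A Python sqlite3 sketch of the same idea sp_executesql gives you in SQL Server (table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-secret"), ("bob", "b-secret")])

attack = "' OR '1'='1"

# String concatenation: the attacker's quote breaks out of the literal,
# the WHERE clause becomes always-true, and every row comes back.
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + attack + "'").fetchall()

# Parameterized: the driver treats the whole string as data, so nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attack,)).fetchall()
# leaked contains every row; safe is empty
```

Besides closing the injection hole, the parameterized form lets the server reuse one compiled plan for every value of the parameter.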
17. Keep Procedures Small
Keep SPs small in size and scope. Two users invoking the same stored procedure simultaneously will cause the procedure to create two query plans in cache. It is much more efficient to have a stored procedure call other ones than to have one large procedure.
18. Bulk INSERT
Use DTS or the BCP utility and you'll have a flexible and fast solution. Avoid using INSERT statements for bulk loading; they are not efficient and were not designed for it.
19. Using JOINS
Make sure that there are n-1 join criteria if there are n tables.
Make sure that ALL tables included in the statement are joined. Make sure that only tables that
· Have columns in the select clause
· Have columns referenced in the where clause
· Allow two unrelated tables to be joined together are included.
20. Trap Errors
Make sure that the @@ERROR global variable is checked after every statement which causes an update to the database (INSERT, UPDATE, DELETE). Make sure that rollbacks (if appropriate) are performed prior to inserting rows into an exception table
21. Small Result Set
Retrieving needlessly large result sets (for example, thousands of rows) for browsing on the client adds CPU and network I/O load, makes the application less capable of remote use, and limits multi-user scalability. It is better to design the application to prompt the user for sufficient input so queries are submitted that generates modest result sets.
22. Negative Arguments
Minimize the use of not equal operations, <> or !=. SQL Server has to scan a table or index to find all values to see if they are not equal to the value given in the expression. Try rephrasing the expression using ranges:
WHERE KeyColumn < 'TestValue' AND KeyColumn > 'TestValue'
23. Date Assumption
Prevent issues with the interpretation of centuries in dates, do not specify years using two digits. Assuming dates formats is the first place to break an application. Hence avoid making this assumption.
24. SP_ Name
DONOT start the name of a stored procedure with SP_. This is because all the system related stored procedures follow this convention. Hence a valid procedure today may clash with the naming convention of a system procedure that gets bundled with a Service pack / Security patch tomorrow. Hence do not follow this convention.
25. Apply the latest Security Packs / Service Packs
Even though this point applies to the network and the database administrators, it is always better to keep up-to date on the software’s. With the "slammer" virus and many more still outside, it is one of the best practices to be up-to date on the same. Consider this strongly.
26. Using Count(*)
The only 100 percent accurate way to check the number of rows in a table is to use a COUNT(*) operation. The statement might consume significant resources if your tables are very big because scanning a large table or index can consume a lot of I/O. Avoid these type of queries to the maximum. Use short circuting methods as EXISTS etc. Here is one other way you can find the total number of rows in a table. SQL Server Books Online (BOL) documents the structure of sysindexes; the value of sysindexes.indid will always be 0 for a table and 1 for a clustered index. If a table doesn't have a clustered index, its entry in sysindexes will always have an indid value of 0. If a table does have a clustered index, its entry in sysindexes will always have an indid value of 1.
SELECT object_name(id) ,rowcnt FROM sysindexes WHERE indid IN (1,0) AND OBJECTPROPERTY(id, 'IsUserTable') = 1
27. Ownership Chaining
Try using this feature (available from SQL Server 2000 SP3), for permission management within a single database. Avoid using this feature to manage permissions across database.
28. SQL Injection
Security has been a prime concern for everyone. Hence validate all the incoming parameters at all levels of the application. Limit the scope of possible damage by permitting only minimally privileged accounts to send user input to the server. Adding to it, run SQL Server itself with the least necessary privileges.
29. Fill-factor
The 'fill factor' option specifies how full SQL Server will make each index page. When there is no free space to insert new row on the index page, SQL Server will create new index page and transfer some rows from the previous page to the new one. This operation is called page splits. You can reduce the number of page splits by setting the appropriate fill factor option to reserve free space on each index page. The fill factor is a value from 1 through 100 that specifies the percentage of the index page to be left empty. The default value for fill factor is 0. It is treated similarly to a fill factor value of 100, the difference in that SQL Server leaves some space within the upper level of the index tree for FILLFACTOR = 0. The fill factor percentage is used only at the time the index is created. If the table contains read-only data (or data that very rarely changed), you can set the 'fill factor' option to 100. When the table's data modified very often, you can decrease the 'fill factor' option to 70 percent, for example. Having explained page splits in detail I would warn you in over looking at this point because more free space means that SQL Server has to traverse through more pages to get the same amount of data. Hence try to strike a balance and arrive at an appropriate value.
30. Start-up Procedures
Verify all the stored procedures for safety reasons.
31. Analyze Blocking
More often than not any implementers nightmare would be to see a blocking process. Blocking occurs when a process must wait for another process to complete. The process must wait because the resources it needs are exclusively used by another process. A blocked process will resume operation after the resources are released by the other process. Sometimes this can become cyclic and the system comes to a stand still. The only solution is to analyze your indexing strategy and table design. Consider these points strongly.
32. Avoid Un-necessary Indexes
Avoid creating un-necessary indexes on table thinking they would improve your performance. Understand that creating Indexes and maintaining them are overheads that you incur. And these surely do reduce the throughput for the whole application. You can create a simple test on a large table and find it for yourself how multiple indexes on the same column decrease performance.
33. Consider Indexed Views
Sometimes we would require an view to be indexed. This feature is bundled with SQL Server 2000. The result set of the indexed view is persist in the database and indexed for fast access. Because indexed views depend on base tables, you should create indexed views with SCHEMABINDING option to prevent the table or column modification that would invalidate the view. Hence using them can reduce a lot of load on the base tables but increases the maintainability.
34. WITH SORT_IN_TEMPDB Option
Consider using this option when you create an index and when tempdb is on a different set of disks than the user database. This is more of a tuning recommendation. Using this option can reduce the time it takes to create an index, but increases the amount of disk space used to create an index. Time is precious, disk is cheaper.
35. Reduce Number of Columns
Try to reduce the number of columns in a table. The fewer the number of columns in a table, the less space the table will use, since more rows will fit on a single data page, and less I/O overhead will be required to access the table's data. This should be considered strongly by applications that talk across different machines. More the unwanted data passed more is the network latency observed.
In many sessions and user groups I've been asked for the best, or rather better, practices for coding with SQL Server 2000. The views listed below are mine, and I'm sure most SQL Server gurus would not argue otherwise. I thought of adding a section on coding practices, and here it is. You can also consider these as guidelines for development on SQL Server. I hope you get good mileage out of this article.
Catch the first version of the article at the MSDN India site. As more practices get added, I host the updates on this site as well.
Note: In this article I'll assume that you already know T-SQL syntax and that we are working with SQL Server 2000.
1. Normalize your tables
There are two common excuses for not normalizing databases: performance and pure laziness. You'll pay for the second one sooner or later; and as for performance, don't optimize what isn't slow. More often than not, the denormalized design turns out to be the slower one. DBMSs were designed to be used with normalized databases, and SQL Server is no exception, so design with normalization in mind.
2. Avoid using cursors
Use cursors wisely. Cursors are fundamentally evil: they force the database engine to repeatedly fetch rows, negotiate blocking, manage locks, and transmit results. They consume network bandwidth as the results are transmitted back to the client, where they consume RAM, disk space, and screen real estate. Consider the resources consumed by each cursor you build, and multiply that demand by the number of simultaneous users. Smaller is better. Good DBAs, most of the time, know what they are doing. But if you are reading this, you are not a DBA, right?
Having said this, the next question is: if I were to use cursors, then what? Here are my two cents on cursor usage. Use the appropriate cursor for the job at hand.
· Don't use scrollable cursors unless required.
· Use read-only cursors if you do not intend to update; this covers perhaps 90% of situations.
· Prefer forward-only cursors when you must use a cursor.
· Don't forget to close and deallocate the cursors you use.
· Try to reduce the number of columns and rows fetched in a cursor.
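Putting the bullets above together, a minimal sketch of a well-behaved cursor might look like this (the Customers table and its columns are hypothetical; FAST_FORWARD gives a forward-only, read-only cursor):

```sql
-- Read-only, forward-only cursor, closed and deallocated when done.
DECLARE @Name VARCHAR(50)

DECLARE cust_cursor CURSOR FAST_FORWARD FOR
    SELECT Name FROM Customers WHERE Active = 1   -- fetch only the columns/rows needed

OPEN cust_cursor
FETCH NEXT FROM cust_cursor INTO @Name
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name
    FETCH NEXT FROM cust_cursor INTO @Name
END

CLOSE cust_cursor        -- always close ...
DEALLOCATE cust_cursor   -- ... and deallocate
```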
3. Index Columns
Create Index on columns that are going to be highly selective. Indexes are vital to efficient data access; however, there is a cost associated with creating and maintaining an index structure. For every insert, update and delete, each index must be updated. In a data warehouse, this is acceptable, but in a transactional database, you should weigh the cost of maintaining an index on tables that incur heavy changes. The bottom line is to use effective indexes judiciously. On analytical databases, use as many indexes as necessary to read the data quickly and efficiently.
A classic example: do NOT index a column like "Gender". It has a selectivity of roughly 50%, so on a table of 10 million rows you can expect to traverse about half the rows even when the index is used. Maintaining such indexes can slow your performance.
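As a sketch of the contrast (table and index names here are hypothetical):

```sql
-- A selective column, such as a foreign key searched on constantly,
-- is a good index candidate:
CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)

-- A low-selectivity column is not; roughly half the table matches
-- either value, so the index rarely pays for its maintenance:
-- CREATE INDEX IX_Customers_Gender ON Customers (Gender)   -- avoid
```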
4. Use transactions
Use transactions judiciously; they will save you when things go wrong. After working with data for some time, you'll discover unexpected situations that can make your stored procedure crash. Start the transaction as late as possible and end it as early as possible, to reduce how long resources stay locked. In short, keep transactions short.
5. Analyze deadlocks
Always access your tables in the same order. When working with stored procedures and transactions, you may run into this problem soon enough; almost every SQL programmer or database analyst has. If the order of access changes, there can be a cyclic wait for resources to be released, and users will experience what looks like a permanent hang in the application. Deadlocks can be tricky to find if the lock sequence is not carefully designed. To summarize: a deadlock occurs when two users hold locks on separate objects and each is trying to lock the other's objects. SQL Server automatically detects and breaks the deadlock; the terminated transaction is rolled back and error 1205 is issued.
6. GOTO Usage
Avoid using the infamous GOTO. This is a time-proven means of adding disorder to program flow. There are some cases where intelligent use of GOTO is preferable to dogmatically refusing to use it. On the other hand, unintelligent use of GOTO is a quick ticket to unreadable code.
7. Increase timeouts
When querying a database, the default timeout is often low, such as 30 seconds. Remember that report queries may run longer than this, especially as your database grows. Increase this value to something acceptable for your workload.
8. Avoid NULLable columns
When possible, normalize your tables and separate out the nullable columns. Each NULLable column consumes an extra byte per row and adds overhead when querying data. Separating them makes the design more flexible and faster, and reduces the number of NULLable columns. I'm not saying that NULLs are evil incarnate; they can simplify coding when "missing data" is part of your business rules.
9. TEXT datatype
Avoid the TEXT datatype unless you are storing really large data. TEXT is not flexible to query, is slow, and wastes a lot of space if used incorrectly. Often a VARCHAR will handle your data better. You can also look at the "text in row" table option in SQL Server 2000, but I would still stick with the first statement: avoid TEXT in the first place.
10. SELECT * Usage
It's very difficult to get out of this habit, but believe me, it is essential: do NOT use this syntax. Always qualify the full list of columns. Selecting all columns increases network traffic, requires more buffers and processing, and can prove error prone if the table or view definition changes.
11. Temporary tables usage
Avoid temporary tables unless strictly necessary. More often than not, a subquery can substitute for a temporary table. In SQL Server 2000 there are alternatives such as the TABLE variable datatype, which can provide in-memory storage for small tables inside stored procedures. To recall some of the advantages of table variables:
· A table variable behaves like a local variable. It has a well-defined scope, which is the function, stored procedure, or batch in which it is declared. Within its scope, a table variable may be used like a regular table.
· However, table variables may not be used in INSERT INTO table_variable EXEC stored_procedure or SELECT select_list INTO table_variable statements.
· Table variables are cleaned up automatically at the end of the function, stored procedure, or batch in which they are defined.
· Table variables used in stored procedures result in fewer recompilations than their temporary-table counterparts.
· Transactions involving table variables last only for the duration of an update on the table variable, so table variables require fewer locking and logging resources.
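A minimal table-variable sketch (the Orders table and its columns are hypothetical); the variable is scoped to the batch and cleaned up automatically:

```sql
-- Small intermediate result held in a table variable instead of #temp.
DECLARE @Recent TABLE (OrderID INT, OrderDate DATETIME)

INSERT INTO @Recent (OrderID, OrderDate)
SELECT OrderID, OrderDate
FROM Orders
WHERE OrderDate >= '2007-01-01'

SELECT COUNT(*) AS RecentOrders FROM @Recent
```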
12. Using UDF
UDFs can replace stored procedures, but be careful in their usage: sometimes UDFs can take a toll on your application's performance. Also, UDFs have to be prefixed with the owner's name; this is not a drawback but a requirement. I favor stored procedures over UDFs.
13. Multiple User Scenario
Sometimes two users will edit the same record at the same time. When writing back, the last writer wins and earlier updates are lost. It's easy to detect this situation: create a timestamp column and check it before you write. Code for these practical situations and test your application for these scenarios.
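A sketch of that timestamp check, with hypothetical table and column names (assume a table such as Products with ID, Price, and a TIMESTAMP column RV):

```sql
-- Optimistic concurrency: only update if the row is unchanged since it was read.
DECLARE @ID INT, @NewPrice MONEY, @OriginalRV BINARY(8)
-- ... @OriginalRV holds the RV value captured when the row was fetched ...

UPDATE Products
SET Price = @NewPrice
WHERE ID = @ID
  AND RV = @OriginalRV          -- succeeds only if nobody else changed the row

IF @@ROWCOUNT = 0
    RAISERROR ('Row was modified by another user; reload and retry.', 16, 1)
```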
14. Use SCOPE_IDENTITY
Don't do SELECT MAX(ID) FROM MasterTable when inserting into a details table. This is a common mistake, and it fails when concurrent users are inserting at the same instant. Use SCOPE_IDENTITY or IDENT_CURRENT instead. My choice is SCOPE_IDENTITY, as it returns the identity value generated in the current scope.
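A sketch of the master/detail pattern (DetailTable and its columns are hypothetical, following the MasterTable name already used above):

```sql
-- SCOPE_IDENTITY() returns the identity generated in the current scope,
-- unaffected by other sessions or by triggers firing their own inserts.
DECLARE @MasterID INT

INSERT INTO MasterTable (Name) VALUES ('example')
SET @MasterID = SCOPE_IDENTITY()        -- not SELECT MAX(ID), not @@IDENTITY

INSERT INTO DetailTable (MasterID, Qty) VALUES (@MasterID, 1)
```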
15. Analyze Query Plans
The SQL Server Query Analyzer is a powerful tool and surely your friend; through it you'll learn a lot about how the engine works and how query and index design affect performance. Study the plan shown in the execution plan window for potential bottlenecks.
16. Parameterized queries
Parameterize all your queries using sp_executesql. This helps the optimizer cache execution plans and reuse them the next time the query is requested, saving the time required to parse, compile, and place the execution plan. Avoid dynamic SQL as much as possible.
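A minimal sp_executesql sketch (the Orders table is hypothetical); the plan compiled for this statement text can be reused for any @CustID value:

```sql
-- Parameterized query: statement text stays constant, only the value changes.
EXEC sp_executesql
    N'SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = @CustID',
    N'@CustID INT',
    @CustID = 42
```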
17. Keep Procedures Small
Keep stored procedures small in size and scope. Two users invoking the same stored procedure simultaneously can cause the procedure to create two query plans in cache. It is much more efficient to have a stored procedure call other small ones than to have one large procedure.
18. Bulk INSERT
Use DTS or the BCP utility and you'll have both a flexible and a fast solution. Avoid using plain INSERT statements for bulk loading; they are not efficient and were not designed for that purpose.
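Two bulk-loading sketches, assuming a hypothetical staging table and file path:

```sql
-- T-SQL bulk load of a comma-delimited file into a staging table.
BULK INSERT dbo.StagingOrders
FROM 'C:\data\orders.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

-- Equivalent from the command line with the bcp utility
-- (character mode, comma delimiter, trusted connection):
-- bcp MyDb.dbo.StagingOrders in C:\data\orders.csv -c -t, -S myserver -T
```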
19. Using JOINS
Make sure that there are n-1 join criteria if there are n tables.
Make sure that ALL tables included in the statement are joined, and that a table is included only if it:
· has columns in the SELECT clause,
· has columns referenced in the WHERE clause, or
· allows two otherwise unrelated tables to be joined together.
20. Trap Errors
Make sure that the @@ERROR global variable is checked after every statement that modifies the database (INSERT, UPDATE, DELETE). Make sure that rollbacks (where appropriate) are performed before inserting rows into an exception table.
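A sketch of this pattern (Accounts and ErrorLog are hypothetical names); note that @@ERROR must be captured immediately, since any subsequent statement resets it:

```sql
-- Capture @@ERROR right after the data modification, roll back first,
-- then log the failure.
DECLARE @err INT

BEGIN TRANSACTION
UPDATE Accounts SET Balance = Balance - 100 WHERE ID = 1
SET @err = @@ERROR
IF @err <> 0
BEGIN
    ROLLBACK TRANSACTION
    INSERT INTO ErrorLog (ErrCode, ErrTime) VALUES (@err, GETDATE())
    RETURN
END
COMMIT TRANSACTION
```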
21. Small Result Set
Retrieving needlessly large result sets (for example, thousands of rows) for browsing on the client adds CPU and network I/O load, makes the application less capable of remote use, and limits multi-user scalability. It is better to design the application to prompt the user for sufficient input so that the submitted queries generate modest result sets.
22. Negative Arguments
Minimize the use of not-equal operators, <> or !=. SQL Server has to scan a table or index to find all values and check that they differ from the value given in the expression. Try rephrasing the expression using ranges (note OR, not AND, since no value can be both less than and greater than the same constant):
WHERE KeyColumn < 'TestValue' OR KeyColumn > 'TestValue'
23. Date Assumption
To prevent issues with the interpretation of centuries in dates, do not specify years using two digits. Assuming a date format is one of the first places an application breaks, so avoid making this assumption.
24. SP_ Name
Do NOT start the name of a stored procedure with sp_, because all system stored procedures follow this convention. A procedure name that is valid today may clash with a system procedure that ships with a service pack or security patch tomorrow. Hence, do not follow this convention.
25. Apply the latest Security Packs / Service Packs
Even though this point applies to network and database administrators, it is always better to stay up to date on software. With the "Slammer" worm and many others still out there, keeping current with patches is one of the best practices. Consider this strongly.
26. Using Count(*)
The only 100 percent accurate way to count the rows in a table is a COUNT(*) operation, but the statement can consume significant resources if your tables are very big, because scanning a large table or index generates a lot of I/O. Avoid these queries where you can; use short-circuiting checks such as EXISTS when you only need to know whether rows exist. Here is another way to find the total number of rows in a table. SQL Server Books Online (BOL) documents the structure of sysindexes: the value of sysindexes.indid is always 0 for a table without a clustered index and 1 for a clustered index, so every table has exactly one sysindexes entry with an indid of 0 or 1.
SELECT OBJECT_NAME(id), rowcnt FROM sysindexes WHERE indid IN (0, 1) AND OBJECTPROPERTY(id, 'IsUserTable') = 1
27. Ownership Chaining
Try using this feature (available from SQL Server 2000 SP3) for permission management within a single database. Avoid using it to manage permissions across databases.
28. SQL Injection
Security is a prime concern for everyone, so validate all incoming parameters at all levels of the application. Limit the scope of possible damage by permitting only minimally privileged accounts to send user input to the server. In addition, run SQL Server itself with the least necessary privileges.
29. Fill-factor
The 'fill factor' option specifies how full SQL Server will make each index page. When there is no free space left to insert a new row on an index page, SQL Server creates a new page and transfers some rows from the old page to the new one; this operation is called a page split. You can reduce the number of page splits by setting an appropriate fill factor to reserve free space on each index page. The fill factor is a value from 1 through 100 specifying the percentage of each index page to fill. The default value is 0, which is treated much like 100, except that with FILLFACTOR = 0 SQL Server leaves some space within the upper levels of the index tree. The fill factor percentage is applied only at the time the index is created. If the table contains read-only data (or data that changes very rarely), you can set the fill factor to 100; when the table's data is modified very often, you can decrease it to, say, 70 percent. Having explained page splits in detail, I would also warn you against overdoing this, because more free space means SQL Server has to traverse more pages to read the same amount of data. Try to strike a balance and arrive at an appropriate value.
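As a sketch, a fill factor of 70 on a frequently updated table reserves 30% free space per leaf page (index and table names here are hypothetical):

```sql
-- Reserve free space on index pages to absorb inserts without page splits.
CREATE INDEX IX_Orders_OrderDate ON Orders (OrderDate) WITH FILLFACTOR = 70
```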
30. Start-up Procedures
Review all stored procedures marked to run at server start-up (see sp_procoption) for safety reasons.
31. Analyze Blocking
More often than not, an implementer's nightmare is to see a blocking process. Blocking occurs when a process must wait for another process to complete, because the resources it needs are held exclusively by that other process. A blocked process resumes once the resources are released. Sometimes this becomes cyclic and the system comes to a standstill. The only real solution is to analyze your indexing strategy and table design. Consider these points strongly.
32. Avoid Un-necessary Indexes
Avoid creating unnecessary indexes on a table in the belief that they will improve performance. Understand that creating and maintaining indexes is an overhead you incur, and it can reduce throughput for the whole application. Run a simple test on a large table and see for yourself how multiple indexes on the same column decrease modification performance.
33. Consider Indexed Views
Sometimes we require a view to be indexed; this feature ships with SQL Server 2000. The result set of an indexed view is persisted in the database and indexed for fast access. Because indexed views depend on base tables, you should create them with the SCHEMABINDING option to prevent table or column modifications that would invalidate the view. Using indexed views can take a lot of load off the base tables, but it increases maintenance overhead.
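An indexed-view sketch for SQL Server 2000 (view, table, and column names are hypothetical); note that the first index on a view must be unique and clustered, and aggregated indexed views require COUNT_BIG(*):

```sql
-- Schema-bound view over a hypothetical dbo.Orders table.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
    SELECT CustomerID,
           COUNT_BIG(*) AS OrderCount,     -- required in aggregated indexed views
           SUM(Amount)  AS TotalAmount
    FROM dbo.Orders
    GROUP BY CustomerID
GO
-- Persist and index the view's result set:
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (CustomerID)
```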
34. WITH SORT_IN_TEMPDB Option
Consider using this option when you create an index and tempdb is on a different set of disks than the user database. This is more of a tuning recommendation: the option can reduce the time it takes to create an index, but it increases the disk space used during creation. Time is precious; disk is cheaper.
35. Reduce Number of Columns
Try to reduce the number of columns in a table. The fewer the columns, the less space the table uses, since more rows fit on a single data page and less I/O overhead is required to access the table's data. This matters especially for applications that talk across machines: the more unwanted data you pass, the more network latency you observe.
Monday, March 5, 2007
Top 10 SQL Server tips of 2005
We've tallied the top 10 SQL Server tips of 2005 according to your visits. While SQL Server 2005 introductions piqued your interest, it's tips on those nagging and troublesome administration basics that garnered the most attention. View the top 10 below or check out our complete tips collection.
#1 - Restoring a database from another SQL Server
Restoring a database from another SQL Server is simple -- matching up the logins and users again is not. Get the steps you need to restore one database from another in this tip.
#2 - Hacker's-eye view of SQL Server
This tip outlines the four primary methods used to hack into SQL Server and offers defenses to prevent such security breaches.
#3 - SQL Server performance-tuning worst practices
This tip addresses a number of the worst practices for SQL Server performance, with recommendations for correcting these practices and improving overall system performance.
#4 - Selecting a SQL Server recovery model
SQL Server 2000 offers three recovery models: Full Recovery, Simple Recovery and Bulk-Logged Recovery. This tip explains how to choose the best one for your needs.
#5 - Selecting a SQL Server backup model
This tip looks at several components to help define a backup strategy as well as the different backup options that are available in SQL Server 2000.
#6 - Stored procedure: List database objects by selected type(s)
Here's a simple stored procedure to return a list of all the objects of selected types in the current database.
#7 - Top 10 new features in SQL Server 2005
From programming to administrative capabilities, SQL Server 2005 enhances many existing SQL Server 2000 features and goes well beyond. Learn about key enhancements in this tip.
#8 - 15 SQL Server replication tips in 15 minutes
This list of quick tips and tricks will help you enhance your replication techniques. It is broken up into three groups: performance, monitoring and miscellaneous.
#9 - Maintenance checks for SQL Server
SQL Server's maintenance plan wizard can automate all maintenance tasks for you -- but beware what's really going on behind the scenes. This list of activities will help keep your SQL Server up to par.
#10 - SQL Server backup and restore, part I: The basics
This in-depth tip analyzes the various options available for your backup and recovery process with SQL Server 2000.
.NET Framework 3.0 Virtual Labs
Ever wanted to test software in a sandbox environment? Wouldn't it be great to be able to test the new Microsoft .NET Framework 3.0 technologies immediately, without dedicating one or more computers to the project? Now you can, with Microsoft Virtual Labs.
It's simple - no complex setup or installation is required to try out new features running in the Virtual Lab. You get a downloadable manual and a 90-minute block of time for each module. You can sign up for additional 90-minute blocks anytime. As part of the Virtual Lab, you'll have full access to Microsoft .NET Framework 3.0 through the following modules:
Understanding Windows Communication Foundation
After completing this lab, you will be better able to:
Demonstrate the capabilities and tools of Windows Communication Foundation.
Core Features of Windows CardSpace
After completing this lab, you will be better able to:
Use the Windows CardSpace Control Panel applet
Build a simple WCF application
Add Information Card support to the WCF application
Process claims using System.IdentityModel
Add Information Cards to Browser Applications
Building Windows Presentation Foundation Applications C# Part 1
After completing this lab, you will be better able to:
Develop a basic WPF application
Use a NavigationWindow and page functions to create a wizard
Learn how Property Bags are used
Building Windows Presentation Foundation Applications C# Part 2
After completing this lab, you will be better able to:
Develop a basic WPF application
Use a NavigationWindow and page functions to create a wizard
Learn how Property Bags are used
Building Windows Presentation Foundation Applications VB Part 1
After completing this lab, you will be better able to:
Develop a basic WPF application
Use a NavigationWindow and page functions to create a wizard
Learn how Property Bags are used
Building Windows Presentation Foundation Applications VB Part 2
.NET Framework 3.0 has been released!
The .NET Framework 3.0 has officially been released! You can download the .NET Framework 3.0 components here:
.NET Framework 3.0 Runtime Components
Windows SDK for Vista and the .NET Framework 3.0
Visual Studio 2005 Extensions for .NET Framework 3.0 (Windows Workflow Foundation)
Visual Studio 2005 Extensions for .NET Framework 3.0 (WCF & WPF), November 2006 CTP
Note: if you are using Windows Vista, the .NET Framework 3.0 Runtime Components are installed by default.
The Readme for the released version of the .NET Framework 3.0 is available here. If you have a previous CTP installed, please be sure to review the uninstall instructions.
Useful links:
http://wcf.netfx3.com/ - Windows Communication Foundation
http://wpf.netfx3.com/ - Windows Presentation Foundation
http://wf.netfx3.com/ - Windows Workflow Foundation
http://cardspace.netfx3.com/ - Windows CardSpace
Subscribe to:
Posts (Atom)