
Monday, March 26, 2012

Practical Uses for the "Embedded Code" functionality

Dear Anyone,

I understand that there is an "Embedded Code" functionality in RS2005. Does anyone know of any good uses for it?

Thanks,
Joseph

Normally, you'll use embedded code to create "one-off" functions that are (generally) specific to a report -- maybe something that does a specific calculation you need which isn't directly supported by plain VB.NET expressions.

Using embedded code (or code in a custom assembly) gives you greater control over logic flow and allows you to do things like more "advanced" conditional statements and looping, which you can't do in an expression you place directly behind a control in a report.
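For instance, here's a rough sketch of what an embedded-code function might look like (the function name and the banding logic are invented for illustration). In Report Properties > Code you could paste something like:

```vb
' Hypothetical example: classify a margin value into a named band.
' Multi-branch logic like this is clumsy to inline in a plain expression.
Public Function MarginBand(ByVal margin As Double) As String
    If margin < 0 Then
        Return "Loss"
    ElseIf margin < 0.1 Then
        Return "Thin"
    ElseIf margin < 0.3 Then
        Return "Healthy"
    Else
        Return "Exceptional"
    End If
End Function
```

and then call it from a textbox expression via the `Code` member, e.g. `=Code.MarginBand(Fields!Margin.Value)`.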

People will use a custom assembly (which is then referenced by the report) in cases where the functions are reusable, generally more complex, and do things like access system resources (the file system, the network, etc.).


Is it possible to iterate through a dataset in the report in embedded code? If so, how would you reference the dataset?

No, you can't. It would be nice to be able to just assign a "reporting services dataset" to a true ADO.NET DataTable and then just walk the table, but there is no such functionality.

Hi,

Don't you think there should be a property for a report that controls how many "records" get displayed on a page? Currently when I create a report, only 3 records show up and I get 2000 pages. Instead I would prefer 100 records on one page and a smaller number of pages.

Or is there any way to do this?

Thanks.
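(As an aside: one common way to get a fixed number of rows per page in RS2005 -- sketched here, not tested against this particular report -- is to add a group that buckets the detail rows, then enable a page break on the group:)

```vb
' Hypothetical group expression: put every 100 detail rows into one group,
' then check "Page break at end" on that group to get ~100 rows per page.
=Ceiling(RowNumber(Nothing) / 100)
```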

Practical limitations of SQL Server?

Hi all,
I'm looking for some ball park estimates on when SQL Server might start
to break down, and can't find any reliable info. Any insight is appreciated.

Basically, the situation is this: The database structure is very simple;
just one table with about 15 columns and about 60-75 million rows. There's
no need for massaging data or complex relations, just simple searches on
maybe a max of 5 columns. Out of the gates we'll be looking at having 30
concurrent users and rapidly expanding to about 300-400 concurrent users.

I might need to rebuild the database on a daily or weekly basis
depending on how often changes are made to a master file. In the past I've
been bit in the butt with the absolute crappiness of SQL Server's
replication, so I'm going to try to avoid that path if I can (plus I already
have some scripts written to delete and rebuild a similar database on a
nightly basis). Would it be practical to destroy and rebuild a database this
size on a daily basis?

The big question is if searching 60-75 million records is practical in
SQL server. If so, what kind of machine would I need to get a nearly instant
response time per search (.2 second or so) when everyone's banging on it at
once? How many concurrent users can I expect to be able to practically
support before SQL Server will start to bog down? Thanks for your thoughts,

-Ringo

Ringo (ringo@.*REMOVE*ringosoft.com) writes:
> Basically, the situation is this: The database structure is very simple;
> just one table with about 15 columns and about 60-75 million rows. There's
> no need for massaging data or complex relations, just simple searches on
> maybe a max of 5 columns. Out of the gates we'll be looking at having 30
> concurrent users and rapidly expanding to about 300-400 concurrent users.
> I might need to rebuild the database on a daily or weekly basis
> depending on how often changes are made to a master file. In the past
> I've been bit in the butt with the absolute crappiness of SQL Server's
> replication, so I'm going to try to avoid that path if I can (plus I
> already have some scripts written to delete and rebuild a similar
> database on a nightly basis). Would it be practical to destroy and
> rebuild a database this size on a daily basis?
> The big question is if searching 60-75 million records is practical in
> SQL server. If so, what kind of machine would I need to get a nearly
> instant response time per search (.2 second or so) when everyone's
> banging on it at once? How many concurrent users can I expect to be able
> to practically support before SQL Server will start to bog down?

These questions are just about impossible to answer. If you run this
database on a P100 with 128 MB, it is going to break down quite soon,
but if you run it on a super-duper server with tons of disk space that
you have spread this single table over, then you might have a hard time
even if you are set on killing SQL Server.

300-400 concurrent users does not say much by itself. It matters quite
a bit whether a user submits a query once an hour or 15 times per minute.

As for the speed of loading the data versus the speed of retrieval, there
is a trade-off. For the retrieval speed you indicate, you need good indexes,
and you may need an indexed view for some searches. But many indexes
make the load slower. You can drop the indexes when you load, but then
you still need to rebuild them.
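A sketch of that drop-and-rebuild pattern (the table, index, and column names here are invented for illustration):

```sql
-- Drop the nonclustered index before the nightly bulk load...
DROP INDEX BigTable.ix_BigTable_SearchCols;

-- ... perform the bulk load here ...

-- ... then rebuild it afterwards, so searches stay fast.
CREATE NONCLUSTERED INDEX ix_BigTable_SearchCols
    ON BigTable (Col1, Col2, Col3);
```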

I don't know what the "master file" you mention is, but maybe you should
reconsider using replication. I don't have much experience with SQL
Server's replication features, but I have never heard anyone else talk
about its "absolute crappiness".

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

"Ringo" <ringo@.*REMOVE*ringosoft.com> wrote in message
news:6HYQa.226055$_w.9147313@.twister.southeast.rr. com...
> Hi all,
> I'm looking for some ball park estimates on when SQL Server might start
> to break down, and can't find any reliable info. Any insight is appreciated.
> Basically, the situation is this: The database structure is very simple;
> just one table with about 15 columns and about 60-75 million rows. There's
> no need for massaging data or complex relations, just simple searches on
> maybe a max of 5 columns. Out of the gates we'll be looking at having 30
> concurrent users and rapidly expanding to about 300-400 concurrent users.

Again, how often are they submitting queries and what sorts of queries?

If it is as straightforward as you claim, I can't see any major problems
here. As others have pointed out, a number of us routinely have databases
with far more rows than that.

Speaking not of row counts but of total DB size, my largest DB is 27 gig and
growing. No performance issues there.

> I might need to rebuild the database on a daily or weekly basis
> depending on how often changes are made to a master file. In the past I've
> been bit in the butt with the absolute crappiness of SQL Server's
> replication, so I'm going to try to avoid that path if I can (plus I already
> have some scripts written to delete and rebuild a similar database on a
> nightly basis). Would it be practical to destroy and rebuild a database this
> size on a daily basis?

Not sure how you'd be using replication in a scenario like this. But, I've
got some experience with replication and can always help you out there. We
routinely use replication at my job w/o too much trouble.

If you're doing a complete rebuild though, BCP is probably the way to go.
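A rough sketch of what that load could look like (server, database, table, and file names are all placeholders):

```sql
-- From the command line, something along the lines of:
--   bcp MyDb.dbo.BigTable in C:\loads\bigtable.dat -n -T -S myserver -b 50000
-- (-n = native format, -T = trusted connection, -b = batch size)
--
-- or, roughly equivalently, from T-SQL:
BULK INSERT dbo.BigTable
    FROM 'C:\loads\bigtable.dat'
    WITH (DATAFILETYPE = 'native', BATCHSIZE = 50000);
```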

> The big question is if searching 60-75 million records is practical in
> SQL server. If so, what kind of machine would I need to get a nearly instant
> response time per search (.2 second or so) when everyone's banging on it at
> once? How many concurrent users can I expect to be able to practically
> support before SQL Server will start to bog down? Thanks for your thoughts,

Index the stuff.
Make sure you re-use connections. (We found in one application that
opening/closing the ODBC connection was 10x more expensive than the query
itself.)
Give us more data.
Look at Kalen Delaney's "Inside SQL Server 2000" for more details.
And there's a good MS Press book on optimizing SQL Server.
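To make "index the stuff" concrete, a sketch with invented column names for the five searchable columns:

```sql
-- One nonclustered index per common search pattern. An index leading on
-- the most selective column can often serve several different queries.
CREATE NONCLUSTERED INDEX ix_BigTable_CustomerId
    ON BigTable (CustomerId);

CREATE NONCLUSTERED INDEX ix_BigTable_OrderDate_Status
    ON BigTable (OrderDate, Status);
```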

> -Ringo

Maybe you can split the data in the database and retrieve it with a
SELECT in combination with UNION. Success!
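What the poster is likely describing is a partitioned view -- a sketch with made-up table names:

```sql
-- Split the rows across member tables (say, by year), each carrying a
-- CHECK constraint on the partitioning column, then query them through
-- a single view. UNION ALL lets the optimizer skip irrelevant tables.
CREATE VIEW AllRows AS
    SELECT * FROM Rows2002
    UNION ALL
    SELECT * FROM Rows2003;
```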

"Ringo" <ringo@.*REMOVE*ringosoft.com> wrote in message news:<6HYQa.226055$_w.9147313@.twister.southeast.rr.com>...
> Hi all,
> I'm looking for some ball park estimates on when SQL Server might start
> to break down, and can't find any reliable info. Any insight is appreciated.
> Basically, the situation is this: The database structure is very simple;
> just one table with about 15 columns and about 60-75 million rows. There's
> no need for massaging data or complex relations, just simple searches on
> maybe a max of 5 columns. Out of the gates we'll be looking at having 30
> concurrent users and rapidly expanding to about 300-400 concurrent users.
> I might need to rebuild the database on a daily or weekly basis
> depending on how often changes are made to a master file. In the past I've
> been bit in the butt with the absolute crappiness of SQL Server's
> replication, so I'm going to try to avoid that path if I can (plus I already
> have some scripts written to delete and rebuild a similar database on a
> nightly basis). Would it be practical to destroy and rebuild a database this
> size on a daily basis?
> The big question is if searching 60-75 million records is practical in
> SQL server. If so, what kind of machine would I need to get a nearly instant
> response time per search (.2 second or so) when everyone's banging on it at
> once? How many concurrent users can I expect to be able to practically
> support before SQL Server will start to bog down? Thanks for your thoughts,
> -Ringo