This book (TinyOS Programming, by Phil Levis) can be downloaded from Phil Levis's website.
1. It's better to name components with a C or P suffix. C stands for "component"; P stands for "private", meaning the component should not be used directly.
2. A module must implement the commands of the interfaces it provides as well as the events of the interfaces it uses.
3. A TMilli timer fires 1024 times per second, not 1000. This is because many microcontrollers cannot count at 1 kHz accurately, but they can count at 1024 Hz accurately.
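So a TMilli "millisecond" is really 1/1024 of a second (a binary millisecond). If real milliseconds matter, convert explicitly; a minimal sketch (the helper name is mine, not a TinyOS API):
uint32_t ms_to_tmilli(uint32_t ms) {
  // 1 TMilli tick = 1/1024 s; beware of uint32_t overflow for periods longer than about 70 minutes
  return (ms * 1024UL) / 1000UL;
}
For example, to get a real one-second period, start the timer with ms_to_tmilli(1000), i.e. 1024 ticks.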
4. To avoid recursion, split-phase commands must never directly signal their callbacks.
5. Use an enum to define constants, such as
enum {
  MAX = 2
};
because this can both save memory and improve performance.
But do not declare variables with an enum type, like
typedef enum {
  s1 = 0,
  s2 = 1,
  s3 = 2,
  s4 = 3
} state;
since a variable of type state occupies a full int (16 bits, i.e. 2 bytes, on typical mote MCUs), while only 1 byte is actually needed.
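A minimal sketch of the recommended pattern (the state names and values are invented for illustration): keep the constants in an anonymous enum, which allocates no storage, and keep the state itself in a uint8_t.
enum {              // anonymous enum: constants only, no RAM used
  STATE_IDLE = 0,
  STATE_BUSY = 1,
  STATE_DONE = 2
};
uint8_t state = STATE_IDLE;   // 1 byte, instead of an int-sized enum variable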
6. The prefixes "nx_" and "nxle_" provide platform-independent types: "nx_" stands for big-endian, and "nxle_" stands for little-endian. For instance, "nx_uint8_t" is a big-endian 8-bit integer.
Note that if the program has to perform significant computation on a platform-independent value, or access it many times, it's better to convert it to a native type first, e.g.
nx_uint16_t x = 5;
uint16_t y = x;
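A slightly fuller sketch (the buffer and variable names are invented): every access to an nx_ value triggers a byte swap, so hoist it into a native variable before a loop.
enum { SAMPLES = 16 };
nx_uint16_t threshold;        // e.g. received over the radio, stored big-endian
uint16_t samples[SAMPLES];    // native buffer

task void countAboveThreshold() {
  uint16_t t = threshold;     // convert to native byte order once
  uint8_t i, count = 0;
  for (i = 0; i < SAMPLES; i++) {
    if (samples[i] > t)       // native comparison; no per-iteration byte swap
      count++;
  }
  // ... use count ...
}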
7. (still a little unclear to me) Automatic initialization vs. re-initialization.
8. Don't signal events from commands. The command should post a task that signals the event, i.e. instead of
command error_t Read.read() {
  signal Read.readDone(SUCCESS, val);
  return SUCCESS;
}
event void Read.readDone(error_t err, uint16_t val) {
  buffer[index] = val;
  index++;
  if (index < BUFFER_SIZE)
    call Read.read();    // with the direct signal above, this recurses
}
use
command error_t Read.read() {
  post readDoneTask();
  return SUCCESS;
}
task void readDoneTask() {
  signal Read.readDone(SUCCESS, val);
}
Sunday, January 24, 2010
How to install TinyOS 2 on MacOSX 10.6 (Snow Leopard)
Xcode (install it from either the OS install DVD or http://developer.apple.com/tools/xcode/)
1. Installing MacPorts (DarwinPorts)
Download MacPorts from http://www.macports.org/ and install it.
2. Installing NesC and avr/msp430 tools
The stock ports have a problem on Snow Leopard (actually, it is a script error), so use the JHU ports tree instead:
- cd /Users/user (replace user with your username) and run git clone git://hinrg.cs.jhu.edu/git/ports.git
- Edit /opt/local/etc/macports/sources.conf to include the line file:///Users/user/ports (this line should come before all other lines)
- sudo port install msp430-binutils-tinyos msp430-gcc-tinyos msp430-libc-tinyos (for MSP430)
- sudo port install avr-binutils-tinyos avr-gcc-tinyos avr-libc-tinyos avrdude-tinyos (for AVR)
3. Installing FTDI Drivers
Download and install drivers from http://www.ftdichip.com/Drivers/VCP.htm
4. Installing tinyos2.x
Enter your work directory and run git clone git://hinrg.cs.jhu.edu/git/tinyos-2.x.git
This copy is imported from TinyOS CVS and updated.
Then we need to set the TinyOS environment variables.
An official tinyos.sh script is provided. Place it in the TinyOS source directory and add the line "source @your_path@/tinyos.sh" to the '.profile' file in your home directory, so the script runs every time a new terminal is opened. Also, remember to replace "@your_path@" with the real path, both inside tinyos.sh and in this line.
5. Installing the tinyos2.x toolset
Run the following commands:
- cd tools
- ./Bootstrap
- ./configure
- make
- sudo make install
- sudo tos-install-jni
- cd $TOSROOT/support/sdk/java
- make
In this step we may encounter several compilation errors.
a) The first one is "file not found: PrintMsg.java"
I don't know why, but the file support/sdk/java/net/tinyos/message/SerialPacket.java is missing in the newest version of TinyOS.
Just copy it from an older repository and run make again.
b) The second one is "/usr/include/stdlib.h:272: syntax error before '^'"
If this happens, one solution is to install gcc 4.3 via MacPorts and run the commands above again:
- sudo port install gcc43 (it may take hours...)
- sudo port install gcc_select (gcc_select is a handy tool for switching the default gcc)
6. Compiling an Application
- cd $TOSROOT/apps/Blink
- make telosb
Finally, we come to the end.
Of course you can try more applications, as described on the official website.
Saturday, November 7, 2009
Postgresql - Storage Level
1. Page
The page is the fundamental storage unit of PostgreSQL.
There is a figure in src/include/storage/bufpage.h depicting the page structure.
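Roughly, a page starts with a header like the one below, followed by the line-pointer array growing forward from pd_lower, free space, tuple data growing backward from pd_upper, and an index-specific "special" area at the end. This is a simplified sketch using plain C types; see bufpage.h for the authoritative definition.
#include <stdint.h>

typedef struct PageHeaderData {
    uint64_t pd_lsn;              /* LSN of the last WAL record touching this page */
    uint16_t pd_tli;              /* timeline id */
    uint16_t pd_flags;            /* flag bits */
    uint16_t pd_lower;            /* offset to start of free space */
    uint16_t pd_upper;            /* offset to end of free space */
    uint16_t pd_special;          /* offset to start of the special space */
    uint16_t pd_pagesize_version; /* page size and layout version */
    uint32_t pd_prune_xid;        /* oldest prunable XID on the page, or zero */
    /* ItemIdData line pointers follow, then free space, then tuples */
} PageHeaderData;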
2. FSM
PostgreSQL uses additional pages, called "FSM" (free space map) pages, to track the free space in heap pages.
The free space of each heap page is quantized into units of BLCKSZ/256 bytes, so a single byte is enough to record how much free space is left in a page.
Actually, the FSM is a tree structure. Suppose we have heap pages with free space values 3, 4, 0, 2; then the FSM looks like this:
      4
  4       2
3   4   0   2
More details can be found in /src/backend/storage/freespace/README.
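The point of this max-tree is that both lookup ("find a page with at least k units of free space") and update take O(log n): each inner node stores the maximum of its children, so a search only descends into subtrees that still have enough space. A minimal sketch, assuming the tree is stored as an implicit binary heap in an array (illustrative only, not the actual PostgreSQL code):
#include <stdint.h>

#define LEAVES 4                  /* number of heap pages tracked; assumed a power of two */
static uint8_t fsm[2 * LEAVES];   /* fsm[1] is the root; leaves live at fsm[LEAVES..2*LEAVES-1] */

/* Record that heap page `page` now has `avail` free-space units. */
void fsm_set(int page, uint8_t avail) {
    int i = LEAVES + page;
    fsm[i] = avail;
    for (i /= 2; i >= 1; i /= 2) {            /* propagate the new maximum toward the root */
        uint8_t m = fsm[2*i] > fsm[2*i + 1] ? fsm[2*i] : fsm[2*i + 1];
        if (fsm[i] == m)
            break;                            /* ancestors are already correct */
        fsm[i] = m;
    }
}

/* Return a heap page with at least `need` free-space units, or -1 if none exists. */
int fsm_search(uint8_t need) {
    int i = 1;
    if (fsm[i] < need)
        return -1;
    while (i < LEAVES)                        /* descend to a leaf that satisfies the request */
        i = (fsm[2*i] >= need) ? 2*i : 2*i + 1;
    return i - LEAVES;
}
With the example values (3, 4, 0, 2) the leaves are fsm[4..7], the inner nodes come out as 4 and 2, and the root is 4, matching the little tree above; fsm_search(2) descends left twice and returns page 0.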
3. Virtual File Descriptor
A Vfd (virtual file descriptor) is a wrapper around an OS file descriptor.
The reason for such an abstraction is that PostgreSQL may open file descriptors for many purposes (base tables, sort and hash spool files, etc.), so the number of descriptors in use can easily exceed the system limit.
Vfds are managed as an LRU pool, and all file operations (opening, closing, and so on) should go through the Vfd interfaces.
More details can be found in /src/backend/storage/file/fd.c .
Also, there is a file "buffile.c" in the same directory, which provides a (very incomplete) emulation of stdio on top of virtual files.
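A minimal sketch of the idea (invented names, not the actual fd.c code): hand out virtual descriptors, open the real file lazily, and when the descriptor limit is reached close the least recently used real descriptor; it is reopened transparently the next time its Vfd is used. Error handling is omitted.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_VFDS 64   /* virtual descriptors we can hand out */
#define MAX_REAL  8   /* pretend limit on simultaneously open OS descriptors */

typedef struct {
    int  fd;          /* real descriptor, or -1 if currently closed */
    long last_use;    /* LRU timestamp */
    char path[256];
    int  flags;
} Vfd;

static Vfd  vfds[MAX_VFDS];
static int  nvfds, nreal;
static long lru_clock;

/* Close the least recently used real descriptor to free a slot. */
static void close_lru(void) {
    int victim = -1;
    for (int i = 0; i < nvfds; i++)
        if (vfds[i].fd != -1 &&
            (victim == -1 || vfds[i].last_use < vfds[victim].last_use))
            victim = i;
    if (victim != -1) {
        close(vfds[victim].fd);
        vfds[victim].fd = -1;
        nreal--;
    }
}

/* Hand out a virtual descriptor; the real open() happens lazily. */
int vfd_open(const char *path, int flags) {
    Vfd *v = &vfds[nvfds];
    v->fd = -1;
    v->flags = flags;
    snprintf(v->path, sizeof v->path, "%s", path);
    return nvfds++;
}

/* Ensure the real descriptor is open, evicting the LRU one if necessary. */
static int vfd_real(int h) {
    Vfd *v = &vfds[h];
    if (v->fd == -1) {
        if (nreal >= MAX_REAL)
            close_lru();
        v->fd = open(v->path, v->flags, 0666);
        nreal++;
    }
    v->last_use = ++lru_clock;
    return v->fd;
}

/* All I/O goes through the Vfd layer, e.g.: */
ssize_t vfd_read(int h, void *buf, size_t len) {
    return read(vfd_real(h), buf, len);
}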
Monday, November 2, 2009
Is the Next DBMS Revolution Looming? (repost)
By Guy Harrison - Posted Jun 15, 2008
For the first time in over 20 years, there appear to be cracks forming in the relational model’s dominance of the database management systems market. The relational database management system (RDBMS) of today is increasingly being seen as an obstacle to the IT architectures of tomorrow, and - for the first time - credible alternatives to the relational database are emerging. While it would be reckless to predict the demise of the relational database as a critical component of IT architectures, it is certainly feasible to imagine the relational database as just one of several choices for data storage in next-generation applications.
The Last DBMS Revolution
The relational database architecture became dominant throughout the 1980s in conjunction with the rise of minicomputer and client-server architectures. Client-server applications were revolutionary in terms of ease of use, functionality, development and deployment costs. The relational database also made it easier for business to access and leverage DBMS data. Business Intelligence, reporting tools and the data warehouse entrenched the value of data and helped the relational database achieve almost total dominance by the mid-1990s.
The Failed OODBMS Revolution
However, from an application developer’s point of view, the relational model was not ideal. The RDBMS came to prominence during the same period as object-oriented (OO) programming. While the relational database represented data as a set of tables with a regular row-column structure, OO represented data as objects that not only had associated behaviors but also complex internal structure. The disconnect between these two representations created an “impedance mismatch” that reduced application cohesiveness.
In an attempt to resolve this disconnect, the object-oriented database management system (OODBMS) was established. In an OODBMS, application data is represented by persistent objects that match the objects used in the programming language.
However, OODBMS failed to have a significant impact. The OO model was programmer-centric and did not address business intelligence needs. Eventually, the Internet boom rendered the issue moot and the industry standardized on the more mature RDBMS. As a workaround, many application frameworks developed object-relational mapping (ORM) schemes which allowed object-oriented access to relational data.
Enter Utility Computing
The Internet gold rush and the global Y2K effort resulted in almost budget-less funding for computer hardware, software and staffing. However, since the bursting of the bubble, the IT industry has been subjected to unrelenting pressure to reduce cost.
It had been clear for some time that the allocation and utilization of computing resources was inherently inefficient. Because each application used dedicated hardware, the hardware had to be sized to match peak application processing requirements. Off-peak, these resources were wasted.
The utility computing concept introduced the idea of allowing computing resources to be allocated on demand, in much the same way a power company makes electricity available on demand to consumers. Such an approach could reduce cost both through economies of scale and by averaging out peak demands between applications.
Virtualization, grid computing and the Internet as a universal wide area network have combined to deliver an emerging realization of the utility vision in the shape of a computing “cloud.”
In a cloud computing configuration, application resources - or even the application itself - are made available from virtualized resources located somewhere in the Internet (e.g., in the cloud).
Figure 1: Grids, virtual servers and the cloud
RDBMS Gets in the Way Again
Most components of modern applications can be deployed to a virtualized or grid environment without significant disruption. Web servers and applications servers all cluster naturally and resources can be added or removed from these layers simply by starting or stopping members of a cluster.
Unfortunately, it’s much harder to cluster databases. In a traditional database cluster, data must either be replicated across the cluster members, or partitioned between them. In either case, adding a machine to the cluster requires data to be copied or moved to the new node. Since this data shipping is a time-consuming and expensive process, databases are unable to be dynamically and efficiently provisioned on demand.
Oracle’s attempt to build a grid database - Oracle Real Application Clusters (RAC) - is in theory capable of meeting the challenges of the cloud. However, RAC is seen as too proprietary, expensive and high-maintenance by most of those trying to establish computing clouds.
Cloud Databases
For those seeking to create public computing clouds (such as Amazon) or those trying to establish massively parallel, redundant and economical data-driven applications (such as Google), relational databases became untenable. These vendors needed a way of managing data that was almost infinitely scalable, inherently reliable and cost-effective.
Google’s BigTable solution was to develop a relatively simple storage management system that could provide fast access to petabytes of data, potentially redundantly distributed across thousands of machines.
Physically, BigTable resembles a B-tree index-organized table in which branch and leaf nodes are distributed across multiple machines. Like a B-tree, nodes “split” as they grow and - since nodes are distributed - this allows for very high scalability across large numbers of machines.
Figure 2: Cloud databases distribute data across many hosts
Data elements in BigTable are identified by a primary key, column name and (optionally) a timestamp. Lookups via primary key will be predictable and relatively fast. BigTable provides the data storage mechanism for Google App Engine - Google’s cloud based application environment.
Amazon’s SimpleDB is conceptually similar to BigTable and forms a key part of the Amazon Web Services (AWS) cloud computing environment. Microsoft’s SQL Server Data Services (SSDS) provides a similar capability.
The chasm between the database management capabilities of these cloud databases and mainstream relational databases is huge. Consequently, it’s easy to dismiss their long term potential.
However, for applications already using the ORM-based frameworks, these cloud databases can easily provide core data management functionality. Furthermore, they can provide this functionality with compelling scalability and economic advantages. In short, they exhibit the familiar signature of a disruptive technology - one that provides adequate functionality together with a compelling economic advantage.
Challenges for the Cloud Database
Cloud Databases still have significant technical drawbacks. These include:
* Transactional support and referential integrity. Applications using cloud databases are largely responsible for maintaining the integrity of transactions and relationships between “tables.”
* Complex data accesses. The ORM pattern - and cloud databases - excel at single row transactions - get a row, save a row, etc. However, most non-trivial applications do have to perform joins and other operations.
* Business Intelligence. Application data has value not only in terms of powering applications, but also as information which drives business intelligence. The dilemma of the pre-relational database - in which valuable business data was locked inside of impenetrable application data stores - is not something to which business will willingly return.
Cloud databases could displace the relational database for a significant segment of next-generation, cloud-enabled applications. However, business is unlikely to be enthusiastic about an architecture that prevents application data from being leveraged for BI and decision support purposes. An architecture that delivered the scalability and other advantages of cloud databases without sacrificing information management would therefore be very appealing. In the next part of this article, we’ll look at an intriguing proposal that seems to deliver just that.
****************************************
After reading this passage, I think there are some important things that we should remember.
When developing a new DBMS, the product must be
1. easy to port to
If it takes a lot of effort to port today's RDBMS-based systems to the new DBMS, most people will be reluctant to do so.
2. able to take advantage of off-the-shelf products
A new DBMS should never require a completely new architecture, hardware system, or anything else built from scratch.
Sunday, November 1, 2009
About this blog
This blog is used to record research materials, track state-of-the-art techniques, and write down my own ideas.