Implementing Kernel Object Type (Part 2)

In Part 1 we saw how to create a new kernel object type. The natural next step is to implement some functionality associated with the new object type. Before we dive into that, let’s take a broader view of what we’re trying to do. For comparison, we can take an existing kernel object type, such as a Semaphore or a Section, and look at how it’s “invoked” to get an idea of what we need to do.

A word of warning: this is a code-heavy post, and assumes the reader is fairly familiar with Win32 and native API conventions, and has basic understanding of device driver writing.

The following diagram shows the call flow when creating a semaphore from user mode starting with the CreateSemaphore(Ex) API:

A process calls the officially documented CreateSemaphore, implemented in kernel32.dll. This calls the native (undocumented) API NtCreateSemaphore, converting arguments as needed from Win32 conventions to native conventions. NtCreateSemaphore has no “real” implementation in user mode, as the kernel is the only component that can create a semaphore (or any other kernel object, for that matter). NtDll has code to transition the CPU to kernel mode using the syscall machine instruction on x64. Before issuing a syscall, the code places a number in the EAX CPU register. This number – the system service index – indicates what operation is being requested.

On the kernel side of things, the System Service Dispatcher uses the value in EAX as an index into the System Service Descriptor Table (SSDT) to locate the actual function to call, pointing to the real NtCreateSemaphore implementation. Semaphores are relatively simple objects, so creation is a matter of allocating memory for a KSEMAPHORE structure (and a header), done with ObCreateObject, initializing the structure, and then inserting the object into the system (ObInsertObject).

More complex objects are created similarly, although the actual creation code in the kernel may be more elaborate. Here is a similar diagram for creating a Section object:

As can be seen in the diagram, creating a section involves a private function (MiCreateSection), but the overall process is the same.

We’ll try to mimic creating a DataStack object in a similar way. However, extending NtDll for our purposes is not an option. Even using syscall to make the transition to the kernel is problematic for the following reasons:

  • There is no entry in the SSDT for something like NtCreateDataStack, and we can’t just add an entry because PatchGuard does not like when the SSDT changes.
  • Even if we could add an entry to the SSDT safely, the entry itself is tricky. On x64, it’s not a 64-bit address. Instead, it’s a 28-bit offset from the beginning of the SSDT (the lower 4 bits store the number of parameters passed on the stack), which means the function cannot be too far from the SSDT’s address. Our driver can be loaded to any address, so the offset to anything mapped may be too large to be stored in an SSDT entry.
  • We could fix that problem perhaps by adding code in spare bytes at the end of the kernel mapped PE image, and add a JMP trampoline call to our real function…

Not easy, and we still have the PatchGuard issue. Instead, we’ll go about it in a simpler way – use DeviceIoControl (or the native NtDeviceIoControlFile) to pass the parameters to our driver. The following diagram illustrates this:

We’ll keep the “Win32 API” functions and “Native APIs” implemented in the same DLL for convenience. Let’s start from the top, moving from user space to kernel space. Implementing CreateDataStack involves converting Win32-style arguments to native-style arguments before calling NtCreateDataStack. Here is the beginning:

HANDLE CreateDataStack(_In_opt_ SECURITY_ATTRIBUTES* sa, 
    _In_ ULONG maxItemSize, _In_ ULONG maxItemCount, 
    _In_ ULONG_PTR maxSize, _In_opt_ PCWSTR name) {

Notice the similarity to functions like CreateSemaphore, CreateMutex, CreateFileMapping, etc. An optional name is accepted, as DataStack objects can be named.

Native APIs work with UNICODE_STRINGs and OBJECT_ATTRIBUTES, so we need to do some work to be able to call the native API:

NTSTATUS NTAPI NtCreateDataStack(_Out_ PHANDLE DataStackHandle, 
    _In_opt_ POBJECT_ATTRIBUTES DataStackAttributes, 
    _In_ ULONG MaxItemSize, _In_ ULONG MaxItemCount, ULONG_PTR MaxSize);

We start by building an OBJECT_ATTRIBUTES:

UNICODE_STRING uname{};
if (name && *name) {
	RtlInitUnicodeString(&uname, name);
}
OBJECT_ATTRIBUTES attr;
InitializeObjectAttributes(&attr, 
	uname.Length ? &uname : nullptr, 
	OBJ_CASE_INSENSITIVE | (sa && sa->bInheritHandle ? OBJ_INHERIT : 0) | (uname.Length ? OBJ_OPENIF : 0),
	uname.Length ? GetUserDirectoryRoot() : nullptr, 
	sa ? sa->lpSecurityDescriptor : nullptr);

If a name is provided, we wrap it in a UNICODE_STRING. The security attributes, if provided, contribute the inherit flag and the security descriptor. The most interesting part is how the name is resolved. When calling a function like the following:

CreateSemaphore(nullptr, 100, 100, L"MySemaphore");

The object name is not going to be just “MySemaphore”. Instead, it’s going to be something like “\Sessions\1\BaseNamedObjects\MySemaphore”. This is because the Windows API uses “local” session-relative names by default. Our DataStack API should provide the same semantics, which means the base directory in the Object Manager’s namespace for the current session must be used. This is the job of GetUserDirectoryRoot. Here is one way to implement it:

HANDLE GetUserDirectoryRoot() {
	static HANDLE hDir;
	if (hDir)
		return hDir;

	DWORD session = 0;
	ProcessIdToSessionId(GetCurrentProcessId(), &session);

	UNICODE_STRING name;
	WCHAR path[256];
	if (session == 0)
		RtlInitUnicodeString(&name, L"\\BaseNamedObjects");
	else {
		wsprintfW(path, L"\\Sessions\\%u\\BaseNamedObjects", session);
		RtlInitUnicodeString(&name, path);
	}
	OBJECT_ATTRIBUTES dirAttr;
	InitializeObjectAttributes(&dirAttr, &name, OBJ_CASE_INSENSITIVE, nullptr, nullptr);
	NtOpenDirectoryObject(&hDir, DIRECTORY_QUERY, &dirAttr);
	return hDir;
}

We just need to do that once, since the resulting directory handle can be stored in a global/static variable for the lifetime of the process; we won’t even bother closing the handle. The native NtOpenDirectoryObject is used to open a handle to the correct directory and return it. Notice that for session 0, there is a special rule: its directory is simply “\BaseNamedObjects”.

There is a snag in the above handling, as it’s incomplete. UWP processes have their own object directory based on their AppContainer SID, which looks like “\Sessions\1\AppContainerNamedObjects\{AppContainerSid}”, which the code above is not dealing with. I’ll leave that as an exercise for the interested coder.

Back in CreateDataStack – the session-relative directory handle is stored in the OBJECT_ATTRIBUTES RootDirectory member. Now we can call the native API:

HANDLE hDataStack;
auto status = NtCreateDataStack(&hDataStack, &attr, maxItemSize, maxItemCount, maxSize);
if (NT_SUCCESS(status))
	return hDataStack;

SetLastError(RtlNtStatusToDosError(status));
return nullptr;

If we get a failure status, we convert it to a Win32 error with RtlNtStatusToDosError and call SetLastError to make it available to the caller via the usual GetLastError. Here is the full CreateDataStack function for easier reference:

HANDLE CreateDataStack(_In_opt_ SECURITY_ATTRIBUTES* sa, 
    _In_ ULONG maxItemSize, _In_ ULONG maxItemCount, 
    _In_ ULONG_PTR maxSize, _In_opt_ PCWSTR name) {
	UNICODE_STRING uname{};
	if (name && *name) {
		RtlInitUnicodeString(&uname, name);
	}
	OBJECT_ATTRIBUTES attr;
	InitializeObjectAttributes(&attr, 
		uname.Length ? &uname : nullptr, 
		OBJ_CASE_INSENSITIVE | (sa && sa->bInheritHandle ? OBJ_INHERIT : 0) | (uname.Length ? OBJ_OPENIF : 0),
		uname.Length ? GetUserDirectoryRoot() : nullptr, 
		sa ? sa->lpSecurityDescriptor : nullptr);
	
	HANDLE hDataStack;
	auto status = NtCreateDataStack(&hDataStack, &attr, maxItemSize, maxItemCount, maxSize);
	if (NT_SUCCESS(status))
		return hDataStack;

	SetLastError(RtlNtStatusToDosError(status));
	return nullptr;
}

Next, we need to handle the native implementation. Since we just call our driver, we package the arguments in a helper structure and send it to the driver via NtDeviceIoControlFile:

NTSTATUS NTAPI NtCreateDataStack(_Out_ PHANDLE DataStackHandle,
    _In_opt_ POBJECT_ATTRIBUTES DataStackAttributes, 
    _In_ ULONG MaxItemSize, _In_ ULONG MaxItemCount, ULONG_PTR MaxSize) {
	DataStackCreate data;
	data.MaxItemCount = MaxItemCount;
	data.MaxItemSize = MaxItemSize;
	data.ObjectAttributes = DataStackAttributes;
	data.MaxSize = MaxSize;

	IO_STATUS_BLOCK ioStatus;
	return NtDeviceIoControlFile(g_hDevice, nullptr, nullptr,
        nullptr, &ioStatus, IOCTL_DATASTACK_CREATE, 
        &data, sizeof(data), DataStackHandle, sizeof(HANDLE));
}

Where does g_hDevice come from? When our DataStack.Dll is loaded into a process, we can open a handle to the device exposed by the driver (which we have yet to implement). In fact, if we can’t obtain a handle, the DLL should fail to load:

HANDLE g_hDevice = INVALID_HANDLE_VALUE;

bool OpenDevice() {
	UNICODE_STRING devName;
	RtlInitUnicodeString(&devName, L"\\Device\\KDataStack");
	OBJECT_ATTRIBUTES devAttr;
	InitializeObjectAttributes(&devAttr, &devName, 0, nullptr, nullptr);
	IO_STATUS_BLOCK ioStatus;
	return NT_SUCCESS(NtOpenFile(&g_hDevice, GENERIC_READ | GENERIC_WRITE, &devAttr, &ioStatus, 0, 0));
}

void CloseDevice() {
	if (g_hDevice != INVALID_HANDLE_VALUE) {
		CloseHandle(g_hDevice);
		g_hDevice = INVALID_HANDLE_VALUE;
	}
}

BOOL APIENTRY DllMain(HMODULE hModule, DWORD reason, LPVOID) {
	switch (reason) {
		case DLL_PROCESS_ATTACH:
			DisableThreadLibraryCalls(hModule);
			return OpenDevice();

		case DLL_PROCESS_DETACH:
			CloseDevice();
			break;
	}
	return TRUE;
}

OpenDevice uses the native NtOpenFile to open the handle, because the driver does not provide a symbolic link – a deliberate omission that makes the device slightly harder to reach directly from user mode. If OpenDevice returns false, the DLL fails to load.

Kernel Space

Now we move to the kernel side of things. Our driver must create a device object and expose IOCTLs for calls made from user mode. The additions to DriverEntry are pretty standard:

extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
	UNREFERENCED_PARAMETER(RegistryPath);

	auto status = DsCreateDataStackObjectType();
	if (!NT_SUCCESS(status)) {
		return status;
	}

	UNICODE_STRING devName = RTL_CONSTANT_STRING(L"\\Device\\KDataStack");
	PDEVICE_OBJECT devObj;
	status = IoCreateDevice(DriverObject, 0, &devName, FILE_DEVICE_UNKNOWN, 0, FALSE, &devObj);
	if (!NT_SUCCESS(status))
		return status;

	DriverObject->DriverUnload = OnUnload;
	DriverObject->MajorFunction[IRP_MJ_CREATE] = 
    DriverObject->MajorFunction[IRP_MJ_CLOSE] =
		[](PDEVICE_OBJECT, PIRP Irp) -> NTSTATUS {
		Irp->IoStatus.Status = STATUS_SUCCESS;
		IoCompleteRequest(Irp, IO_NO_INCREMENT);
		return STATUS_SUCCESS;
		};

	DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = OnDeviceControl;

	return STATUS_SUCCESS;
}

The driver creates a single device object with the name “\Device\KDataStack” – the same name used in DllMain to open a handle to that device. IRP_MJ_CREATE and IRP_MJ_CLOSE are supported to make the device usable. Finally, IRP_MJ_DEVICE_CONTROL handling is set up (OnDeviceControl).

The job of OnDeviceControl is to propagate the data provided by helper structures to the real implementation of the native APIs. Here is the code that covers IOCTL_DATASTACK_CREATE:

NTSTATUS OnDeviceControl(PDEVICE_OBJECT, PIRP Irp) {
	auto stack = IoGetCurrentIrpStackLocation(Irp);
	auto& dic = stack->Parameters.DeviceIoControl;
	auto len = 0U;
	auto status = STATUS_INVALID_DEVICE_REQUEST;

	switch (dic.IoControlCode) {
		case IOCTL_DATASTACK_CREATE:
		{
			auto data = (DataStackCreate*)Irp->AssociatedIrp.SystemBuffer;
			//
			// a 32-bit caller expects a 4-byte handle back
			//
			ULONG handleSize = IoIs32bitProcess(Irp) ? sizeof(ULONG) : (ULONG)sizeof(HANDLE);
			if (dic.InputBufferLength < sizeof(*data) || dic.OutputBufferLength < handleSize) {
				status = STATUS_BUFFER_TOO_SMALL;
				break;
			}
			HANDLE hDataStack;
			status = NtCreateDataStack(&hDataStack, 
                data->ObjectAttributes, 
                data->MaxItemSize, 
                data->MaxItemCount, 
                data->MaxSize);
			if (NT_SUCCESS(status)) {
				len = handleSize;
				memcpy(data, &hDataStack, len);
			}
			break;
		}
	}

	Irp->IoStatus.Status = status;
	Irp->IoStatus.Information = len;
	IoCompleteRequest(Irp, IO_NO_INCREMENT);
	return status;
}

NtCreateDataStack is called with the unpacked arguments. The only trick here is the use of IoIs32bitProcess to check if the calling process is 32-bit. If so, 4 bytes should be copied back as the handle instead of 8 bytes.

The real work of creating a DataStack object (finally) falls on NtCreateDataStack. First, we need a structure that manages DataStack objects. Here it is:

struct DataStack {
	LIST_ENTRY Head;
	FAST_MUTEX Lock;
	ULONG Count;
	ULONG MaxItemCount;
	ULONG_PTR Size;
	ULONG MaxItemSize;
	ULONG_PTR MaxSize;
};

The details are not important now, since we’re dealing with object creation only. But we should initialize the structure properly when the object is created. The first major step is telling the kernel to create a new object of DataStack type:

NTSTATUS NTAPI NtCreateDataStack(_Out_ PHANDLE DataStackHandle,
    _In_opt_ POBJECT_ATTRIBUTES DataStackAttributes, 
    _In_ ULONG MaxItemSize, _In_ ULONG MaxItemCount, ULONG_PTR MaxSize) {
	auto mode = ExGetPreviousMode();
	extern POBJECT_TYPE g_DataStackType;
	//
	// sanity check
	//
	if (g_DataStackType == nullptr)
		return STATUS_NOT_FOUND;

	DataStack* ds;
	auto status = ObCreateObject(mode, g_DataStackType, DataStackAttributes, mode, 
		nullptr, sizeof(DataStack), 0, 0, (PVOID*)&ds);
	if (!NT_SUCCESS(status)) {
		KdPrint(("Error in ObCreateObject (0x%X)\n", status));
		return status;
	}

ObCreateObject looks like this:

NTSTATUS NTAPI ObCreateObject(
	_In_ KPROCESSOR_MODE ProbeMode,
	_In_ POBJECT_TYPE ObjectType,
	_In_opt_ POBJECT_ATTRIBUTES ObjectAttributes,
	_In_ KPROCESSOR_MODE OwnershipMode,
	_Inout_opt_ PVOID ParseContext,
	_In_ ULONG ObjectBodySize,
	_In_ ULONG PagedPoolCharge,
	_In_ ULONG NonPagedPoolCharge,
	_Deref_out_ PVOID* Object);

ExGetPreviousMode returns the caller’s mode (UserMode or KernelMode enum values), and based on that we ask ObCreateObject to make the relevant probing and security checks. ObjectType is our DataStack type object, and ObjectBodySize is sizeof(DataStack), our data structure. The last parameter is where the object pointer is returned.

If this succeeds, we need to initialize the structure appropriately, and then add the object to the system “officially”, where the object header would be built as well:

DsInitializeDataStack(ds, MaxItemSize, MaxItemCount, MaxSize);
HANDLE hDataStack;
status = ObInsertObject(ds, nullptr, DATA_STACK_ALL_ACCESS, 0, nullptr, &hDataStack);
if (NT_SUCCESS(status)) {
	*DataStackHandle = hDataStack;
}
else {
	KdPrint(("Error in ObInsertObject (0x%X)\n", status));
}
return status;

DsInitializeDataStack is a helper function to initialize an empty DataStack:

void DsInitializeDataStack(DataStack* DataStack, ULONG MaxItemSize, ULONG MaxItemCount, ULONG_PTR MaxSize) {
	InitializeListHead(&DataStack->Head);
	ExInitializeFastMutex(&DataStack->Lock);
	DataStack->Count = 0;
	DataStack->MaxItemCount = MaxItemCount;
	DataStack->Size = 0;
	DataStack->MaxItemSize = MaxItemSize;
	DataStack->MaxSize = MaxSize;
}

This is it for CreateDataStack and its chain of called functions. Handling OpenDataStack is similar, and simpler, as the heavy lifting is done by the kernel.

Opening an Existing DataStack Object

OpenDataStack attempts to open a handle to an existing DataStack object by name:

HANDLE OpenDataStack(_In_ ACCESS_MASK desiredAccess, _In_ BOOL inheritHandle, _In_ PCWSTR name) {
	if (name == nullptr || *name == 0) {
		SetLastError(ERROR_INVALID_NAME);
		return nullptr;
	}

	UNICODE_STRING uname;
	RtlInitUnicodeString(&uname, name);
	OBJECT_ATTRIBUTES attr;
	InitializeObjectAttributes(&attr,
		&uname,
		OBJ_CASE_INSENSITIVE | (inheritHandle ? OBJ_INHERIT : 0),
		GetUserDirectoryRoot(),
		nullptr);
	HANDLE hDataStack;
	auto status = NtOpenDataStack(&hDataStack, desiredAccess, &attr);
	if (NT_SUCCESS(status))
		return hDataStack;

	SetLastError(RtlNtStatusToDosError(status));
	return nullptr;
}

Again, from a high-level perspective it looks similar to APIs like OpenSemaphore or OpenEvent. NtOpenDataStack will make a call to the driver via NtDeviceIoControlFile, packing the arguments:

NTSTATUS NTAPI NtOpenDataStack(_Out_ PHANDLE DataStackHandle, 
    _In_ ACCESS_MASK DesiredAccess, 
    _In_ POBJECT_ATTRIBUTES DataStackAttributes) {
	DataStackOpen data;
	data.DesiredAccess = DesiredAccess;
	data.ObjectAttributes = DataStackAttributes;

	IO_STATUS_BLOCK ioStatus;
	return NtDeviceIoControlFile(g_hDevice, nullptr, nullptr, nullptr, &ioStatus,
		IOCTL_DATASTACK_OPEN, &data, sizeof(data), DataStackHandle, sizeof(HANDLE));
}

Finally, the implementation of NtOpenDataStack in the kernel is surprisingly simple:

NTSTATUS NTAPI NtOpenDataStack(_Out_ PHANDLE DataStackHandle, 
    _In_ ACCESS_MASK DesiredAccess, 
    _In_ POBJECT_ATTRIBUTES DataStackAttributes) {
	return ObOpenObjectByName(DataStackAttributes, g_DataStackType, ExGetPreviousMode(),
		nullptr, DesiredAccess, nullptr, DataStackHandle);
}

The simplicity is thanks to the generic ObOpenObjectByName kernel API – exported, though not documented – which attempts to open a handle to any named object:

NTSTATUS ObOpenObjectByName(
	_In_ POBJECT_ATTRIBUTES ObjectAttributes,
	_In_ POBJECT_TYPE ObjectType,
	_In_ KPROCESSOR_MODE AccessMode,
	_Inout_opt_ PACCESS_STATE AccessState,
	_In_opt_ ACCESS_MASK DesiredAccess,
	_Inout_opt_ PVOID ParseContext,
	_Out_ PHANDLE Handle);

That’s it for creating and opening a DataStack object. Let’s test it!

Testing

After deploying the driver to a test machine, we can write simple code to create a DataStack object (named or unnamed), and see if it works. Then, we’ll close the handle:

#include <Windows.h>
#include <stdio.h>
#include "..\DataStack\DataStackAPI.h"

int main() {
	HANDLE hDataStack = CreateDataStack(nullptr, 0, 100, 10 << 20, L"MyDataStack");
	if (!hDataStack) {
		printf("Failed to create data stack (%u)\n", GetLastError());
		return 1;
	}

	printf("Handle created: 0x%p\n", hDataStack);

	auto hOpen = OpenDataStack(GENERIC_READ, FALSE, L"MyDataStack");
	if (!hOpen) {
		printf("Failed to open data stack (%u)\n", GetLastError());
		return 1;
	}

	CloseHandle(hDataStack);
	CloseHandle(hOpen);
	return 0;
}

Here is what Process Explorer shows when the handle is open, but not yet closed:

Let’s check the kernel debugger:

kd> !object \Sessions\2\BaseNamedObjects\MyDataStack
Object: ffffc785bb6e8430  Type: (ffffc785ba4fd830) DataStack
    ObjectHeader: ffffc785bb6e8400 (new version)
    HandleCount: 1  PointerCount: 32769
    Directory Object: ffff92013982fe70  Name: MyDataStack
lkd> dt nt!_OBJECT_TYPE ffffc785ba4fd830
   +0x000 TypeList         : _LIST_ENTRY [ 0xffffc785`bb6e83e0 - 0xffffc785`bb6e83e0 ]
   +0x010 Name             : _UNICODE_STRING "DataStack"
   +0x020 DefaultObject    : (null) 
   +0x028 Index            : 0x4c 'L'
   +0x02c TotalNumberOfObjects : 1
   +0x030 TotalNumberOfHandles : 1
   +0x034 HighWaterNumberOfObjects : 1
   +0x038 HighWaterNumberOfHandles : 2
...

After opening the second handle (by name), the debugger reports two handles (different run):

lkd> !object ffffc585f68e25f0
Object: ffffc585f68e25f0  Type: (ffffc585ee55df10) DataStack
    ObjectHeader: ffffc585f68e25c0 (new version)
    HandleCount: 2  PointerCount: 3
    Directory Object: ffffaf8deb3c60a0  Name: MyDataStack

The source code can be found here.

In future parts, we’ll implement the actual DataStack functionality.

Creating Kernel Object Type (Part 1)

Windows provides much of its functionality via kernel objects. Common examples are processes, threads, mutexes, semaphores, sections, and many more. We can see the object types supported on a particular Windows system by using a tool such as Object Explorer, or in a more limited way – WinObj. Here is a view from Object Explorer:

Every object type has a name (e.g. “Process”), an index, the pool to use when allocating memory for objects of that type (typically the Non-Paged pool or the Paged pool), and a few more properties. The “dynamic” part in the above screenshot shows that an object type keeps track of the number of objects and handles currently in use for that type. Object types are themselves objects, as is perhaps evident from the fact that there is an object type called “Type” (index 2, the first one), whose number of “objects” is the number of object types supported on this version of Windows.

User mode clients use kernel objects by invoking APIs. For example, CreateProcess creates a process object and a thread object, returning handles to both. OpenProcess, on the other hand, tries to obtain a handle to an existing process object, given its process ID and the requested access mask. Similar APIs exist for other object types. All this should be fairly familiar to readers of this blog.

A New Kernel Object Type

Can we create a new kernel object type that provides some useful functionality to user-mode (and kernel-mode) clients? Perhaps we should first ask: why would we want to do that? As an alternative, we could create a kernel driver that exposes the desired functionality through I/O control codes, invoked with DeviceIoControl by clients. We could certainly create wrappers that provide nicer-looking functions, so that clients never need to see the DeviceIoControl calls.

I can see at least two reasons to take the “new object type” approach:

  • It’s a great learning opportunity, and could be fun 🙂
  • We get lots of things for free, such as handle and object management (handle count, ref count), sharing capabilities, just like any other kernel object, and some common APIs clients are already familiar with, like CloseHandle.

OK, let’s assume we want to go down this route. How do we create an object type, or a “generic” kernel object, for that matter? As it turns out, the kernel functions needed are exported, but they are not documented. We’ll use them anyway 😉

We’ll create an Empty WDM Driver project in Visual Studio and delete the INF file, so we are left with an empty project with the correct compiler and linker settings. For more information on creating driver projects, consult the official documentation, or my book, “Windows Kernel Programming, 2nd edition“.

We’ll add a C++ file to the project, and write our DriverEntry routine. At first, its only job is to create a new type object:

extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING) {
    DriverObject->DriverUnload = OnUnload;
    return DsCreateDataStackObjectType();
}

The kernel object type we’ll implement is called “DataStack”, and it’s supposed to provide stack-like functionality. You may be wondering what’s so special about that – every language/library under the sun has some stack data structure. Implemented as kernel objects, these data stacks offer some benefits that are normally unavailable:

  • Thread Synchronization, so the data stack can be accessed freely by clients. Data race prevention is the burden of the implementation.
  • These data stacks can be shared between processes, something not offered by typical stack implementations found in languages/libraries.

You could argue that it would be possible to implement such data stacks on top of Section (File Mapping) objects, which allow sharing of memory, with some API that does all the heavy lifting. This is true in essence, but not ideal. The section could be misused by accessing it directly without regard to the data stack implemented. And besides, it’s not the point. You could come up with another kind of object that would not lend itself to easy implementation in other ways.

Back to DriverEntry: The only call is to a helper function, whose purpose is to create the object type. Creating an object type is done with ObCreateObjectType, declared like so:

NTSTATUS NTAPI ObCreateObjectType(
	_In_ PUNICODE_STRING TypeName,
	_In_ POBJECT_TYPE_INITIALIZER ObjectTypeInitializer,
	_In_opt_ PSECURITY_DESCRIPTOR sd,
	_Deref_out_ POBJECT_TYPE* ObjectType);

TypeName is the type name for the new type, which must be unique. ObjectTypeInitializer provides various properties for the type object, some of which we’ll examine momentarily. sd is an optional security descriptor to assign to the new type object, where NULL means the kernel will provide an appropriate default. ObjectType is the returned object type pointer, if successful. The POBJECT_TYPE is not defined in all its glory in the WDK headers, so it’s treated as a PVOID, but that’s fine. We won’t need to look inside.

The simplest way to create the type object would be like so:

UNICODE_STRING typeName = RTL_CONSTANT_STRING(L"DataStack");
OBJECT_TYPE_INITIALIZER init{ sizeof(init) };
auto status = ObCreateObjectType(&typeName, &init, nullptr,
    &g_DataStackType);

The OBJECT_TYPE_INITIALIZER has a Length first member, which must be initialized to the size of the structure, as is common in many Windows APIs. The rest of the structure is zeroed out, which is good enough for our first attempt. The returned pointer lands in a global variable (g_DataStackType), that we can use if needed.

The Unload routine may try to remove the new object type like so:

void OnUnload(PDRIVER_OBJECT DriverObject) {
    UNREFERENCED_PARAMETER(DriverObject);
    ObDereferenceObject(g_DataStackType);
}

Let’s see the effect this code has when the driver is deployed and loaded. First, here is Object Explorer on a Windows 11 VM where the driver is deployed:

Notice the new object type, with an index of 76 and the name “DataStack”. There are zero objects and zero handles for this kind of object right now (no big surprise there). Let’s see what the kernel debugger has to say:

lkd> !object \objecttypes\datastack
Object: ffff8385bd5ff570 Type: (ffff8385b26a8d00) Type
ObjectHeader: ffff8385bd5ff540 (new version)
HandleCount: 0 PointerCount: 2
Directory Object: ffffc9067828b730 Name: DataStack

Clearly there is such an object type, and it has 2 references, one of which we are holding in our kernel variable. We can examine the object in its more specific role as an object type:

lkd> dt nt!_OBJECT_TYPE ffff8385bd5ff570
   +0x000 TypeList         : _LIST_ENTRY [ 0xffff8385`bd5ff570 - 0xffff8385`bd5ff570 ]
   +0x010 Name             : _UNICODE_STRING "DataStack"
   +0x020 DefaultObject    : (null) 
   +0x028 Index            : 0x4c 'L'
   +0x02c TotalNumberOfObjects : 0
   +0x030 TotalNumberOfHandles : 0
   +0x034 HighWaterNumberOfObjects : 0
   +0x038 HighWaterNumberOfHandles : 0
   +0x040 TypeInfo         : _OBJECT_TYPE_INITIALIZER
   +0x0b8 TypeLock         : _EX_PUSH_LOCK
   +0x0c0 Key              : 0x61746144
   +0x0c8 CallbackList     : _LIST_ENTRY [ 0xffff8385`bd5ff638 - 0xffff8385`bd5ff638 ]
   +0x0d8 SeMandatoryLabelMask : 0
   +0x0dc SeTrustConstraintMask : 0

Note the Name and the fact that there are zero objects and handles.

Let’s see what happens when we unload the driver. Even though we dereference our reference (g_DataStackType), the object type is still alive, as the kernel holds on to another reference (this output was generated in a different run of the system, so the addresses are not the same):

lkd> !object \objecttypes\datastack
Object: ffffc081d94fd330 Type: (ffffc081cd6af5e0) Type
ObjectHeader: ffffc081d94fd300 (new version)
HandleCount: 0 PointerCount: 1
Directory Object: ffff968b2c233a10 Name: DataStack

Why do we have another reference? The type object is permanent, as we can see if we examine its object header:

lkd> dt nt!_OBJECT_HEADER ffffc081d94fd300
+0x000 PointerCount : 0n1
+0x008 HandleCount : 0n0
+0x008 NextToFree : (null)
+0x010 Lock : _EX_PUSH_LOCK
+0x018 TypeIndex : 0xc5 ''
+0x019 TraceFlags : 0 ''
+0x019 DbgRefTrace : 0y0
+0x019 DbgTracePermanent : 0y0
+0x01a InfoMask : 0x3 ''
+0x01b Flags : 0x13 ''
+0x01b NewObject : 0y1
+0x01b KernelObject : 0y1
+0x01b KernelOnlyAccess : 0y0
+0x01b ExclusiveObject : 0y0
+0x01b PermanentObject : 0y1
…

We could remove the “permanent” flag from the type object by making it “temporary” (a.k.a. normal) in our Unload routine, like so:

HANDLE hType;
auto status = ObOpenObjectByPointer(g_DataStackType,
    OBJ_KERNEL_HANDLE, nullptr, 0, nullptr, KernelMode, &hType);
if (NT_SUCCESS(status)) {
    status = ZwMakeTemporaryObject(hType);
    ZwClose(hType);
}
ObDereferenceObject(g_DataStackType);

Calling ZwMakeTemporaryObject (a documented API) removes the permanent bit, so that ObDereferenceObject removes the last reference of the DataStack object type. Unfortunately, this works too well – it also causes the system to crash (BSOD), and that’s because the kernel does not expect type objects to be deleted. It makes sense, since objects of that type may still be alive. Even if the kernel could determine that no objects of that type are alive right now, and allow the deletion, what would that mean for future creations? Worse, it’s possible to create objects privately without a header (very common in the kernel), which means the kernel is unaware of these objects to begin with. The bottom line is, type objects cannot be destroyed safely. In our case, it means the driver should remain alive at all times, but regardless, it should not attempt to destroy the type object.

Object Type Customization

The code to create the DataStack object type did not do any customizations. Possible customizations are available via the OBJECT_TYPE_INITIALIZER structure:

typedef struct _OBJECT_TYPE_INITIALIZER {
	USHORT Length;
	union {
		USHORT Flags;
		struct {
			UCHAR CaseInsensitive : 1;
			UCHAR UnnamedObjectsOnly : 1;
			UCHAR UseDefaultObject : 1;
			UCHAR SecurityRequired : 1;
			UCHAR MaintainHandleCount : 1;
			UCHAR MaintainTypeList : 1;
			UCHAR SupportsObjectCallbacks : 1;
			UCHAR CacheAligned : 1;
			UCHAR UseExtendedParameters : 1;
			UCHAR _Reserved : 7;
		};
	};

	ULONG ObjectTypeCode;
	ULONG InvalidAttributes;
	GENERIC_MAPPING GenericMapping;
	ULONG ValidAccessMask;
	ULONG RetainAccess;
	POOL_TYPE PoolType;
	ULONG DefaultPagedPoolCharge;
	ULONG DefaultNonPagedPoolCharge;
	OB_DUMP_METHOD DumpProcedure;
	OB_OPEN_METHOD OpenProcedure;
	OB_CLOSE_METHOD CloseProcedure;
	OB_DELETE_METHOD DeleteProcedure;
	OB_PARSE_METHOD ParseProcedure;
	OB_SECURITY_METHOD SecurityProcedure;
	OB_QUERYNAME_METHOD QueryNameProcedure;
	OB_OKAYTOCLOSE_METHOD OkayToCloseProcedure;
	ULONG WaitObjectFlagMask;
	USHORT WaitObjectFlagOffset;
	USHORT WaitObjectPointerOffset;
} OBJECT_TYPE_INITIALIZER, * POBJECT_TYPE_INITIALIZER;

This is quite a structure. The various *Procedure members are callbacks. You can find their prototypes in the ReactOS source code, but for now you can just replace all of them with an opaque PVOID to make it easier to deal with the structure. At this point, we’ll customize our object type’s creation like so:

UNICODE_STRING typeName = RTL_CONSTANT_STRING(L"DataStack");
OBJECT_TYPE_INITIALIZER init{ sizeof(init) };
init.PoolType = NonPagedPoolNx;
init.DefaultNonPagedPoolCharge = sizeof(DataStack);
init.ValidAccessMask = DATA_STACK_ALL_ACCESS;
GENERIC_MAPPING mapping{
    STANDARD_RIGHTS_READ | DATA_STACK_QUERY,
    STANDARD_RIGHTS_WRITE | DATA_STACK_PUSH | DATA_STACK_POP | 
        DATA_STACK_CLEAR,
    STANDARD_RIGHTS_EXECUTE | SYNCHRONIZE, DATA_STACK_ALL_ACCESS
};
init.GenericMapping = mapping;

auto status = ObCreateObjectType(&typeName, &init, nullptr,
    &g_DataStackType);

The PoolType member indicates from which pool objects of this type should be allocated. I’ve selected the Non Paged pool with no execute allowed. DataStack is the structure we’ll use for the implementation of the type. We’ll see what that looks like in the next post. For now, we indicate to the kernel that the base memory consumption of objects of DataStack type is the size of that structure.

Next, we see some constants being used. I have defined them in a header file that is shared between the driver and user-mode clients, similar to the definitions provided for other object types:

#define DATA_STACK_QUERY	0x1
#define DATA_STACK_PUSH		0x2
#define DATA_STACK_POP		0x4
#define DATA_STACK_CLEAR	0x8

#define DATA_STACK_ALL_ACCESS (STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | DATA_STACK_QUERY | DATA_STACK_PUSH | DATA_STACK_POP | DATA_STACK_CLEAR)

These #defines provide the specific access mask bits for objects of the DataStack type. We’ll use them in the implementation so that only handles with sufficient access allow the relevant operations. In the object type creation, we use them in the ValidAccessMask member to indicate what clients may validly request, and also to provide the generic mapping. Generic mapping is a standard mechanism used by Windows to map the generic rights (GENERIC_READ, GENERIC_WRITE, GENERIC_EXECUTE, and GENERIC_ALL) to specific rights appropriate for the object type. You can see these mappings in Object Explorer for all object types. For example, if a client asks for GENERIC_READ when opening a DataStack object, the access actually requested is going to be STANDARD_RIGHTS_READ | DATA_STACK_QUERY, per the mapping above.

What’s Next?

We have an object type – that’s great! But we can’t yet create objects of this type, nor use it in any way; we’re missing the actual implementation. From a user-mode perspective, we’d like to expose an API, not much different in spirit from that of other object types:

NTSTATUS NTAPI NtCreateDataStack(
	_Out_ PHANDLE DataStackHandle, 
	_In_opt_ POBJECT_ATTRIBUTES DataStackAttributes, 
	_In_ ULONG MaxItemSize, 
	_In_ ULONG MaxItemCount, 
	ULONG_PTR MaxSize);
NTSTATUS NTAPI NtOpenDataStack(
	_Out_ PHANDLE DataStackHandle, 
	_In_ ACCESS_MASK DesiredAccess, 
	_In_opt_ POBJECT_ATTRIBUTES DataStackAttributes);
NTSTATUS NTAPI NtQueryDataStack(
	_In_ HANDLE DataStackHandle, 
	_In_ DataStackInformationClass InformationClass, 
	_Out_ PVOID Buffer, 
	_In_ ULONG BufferSize, 
	_Out_opt_ PULONG ReturnLength);
NTSTATUS NTAPI NtPushDataStack(
	_In_ HANDLE DataStackHandle, 
	_In_ PVOID Item, 
	_In_ ULONG ItemSize);
NTSTATUS NTAPI NtPopDataStack(
	_In_ HANDLE DataStackHandle, 
	_Out_ PVOID Buffer, 
	_Inout_ PULONG BufferSize);
NTSTATUS NTAPI NtClearDataStack(_In_ HANDLE DataStackHandle);

If you’re familiar with the Windows Native API, these DataStack APIs should feel familiar in spirit. The big question is how to implement these APIs – in user mode and in kernel mode. We’ll look into that in the next post.

I have not provided a prebuilt project for this part. Feel free to type things in yourself, as there is not too much code at this point. In the next post, I’ll provide a GitHub repo that has all the code. See you then!

Rust – the Ultimate Programming Language?

Rust has been reported as “most loved language” by some developer surveys, promising safety not available with other mainstream languages. What does that safety really mean? Let’s look deeper.

The idea of a “safe” language has been a hot topic for some time now, where Rust apparently reigns supreme. But what is that “safety”? The primary safety referred to is memory safety.

Languages such as C and C++ allow the programmer to access any memory location without any checks. For example, you can access an array element beyond the size of the array and the compiler wouldn’t notice – it will happily compile the code, only to crash at runtime if you’re lucky. If you are unlucky, some random memory is accessed, which may cause incorrect behavior – which is worse. Rust guarantees that such an out-of-bounds access either fails to compile or, when the index is only known at runtime, panics (crashes in a controlled manner) rather than accessing memory outside the bounds of the array.
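
As a small sketch of that behavior (the values are arbitrary, but the semantics are standard Rust): direct indexing is bounds-checked at runtime, while the checked get method returns an Option instead of ever touching invalid memory:

```rust
fn main() {
    let arr = [10, 20, 30];

    // Checked access: `get` returns an Option instead of panicking.
    assert_eq!(arr.get(1), Some(&20));
    assert_eq!(arr.get(5), None); // out of bounds -> None, no crash

    // Direct indexing is bounds-checked at runtime:
    // let x = arr[5]; // would panic: "index out of bounds"
    println!("second element: {}", arr[1]); // prints "second element: 20"
}
```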

Is Rust the only language that provides this kind of memory safety? Not really. Other languages provide similar safety guarantees – Python, C#, and Java, for example. One extra nicety you get with Rust is the absence of a null reference/pointer – there is no null in safe Rust.
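
To sketch what that means in practice (find_even is a made-up helper, not from the text above): a possibly-missing value is expressed as Option<T>, and the compiler forces callers to handle both cases:

```rust
// Option<T> replaces null: a value is either Some(v) or None, and the
// compiler will not let you use it without handling the None case.
fn find_even(values: &[i32]) -> Option<i32> {
    values.iter().copied().find(|v| v % 2 == 0)
}

fn main() {
    match find_even(&[1, 3, 4, 7]) {
        Some(v) => println!("found {}", v), // prints "found 4"
        None => println!("no even value"),
    }
    // There is no way to accidentally dereference a "null" here.
    assert_eq!(find_even(&[1, 3, 5]), None);
}
```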

“Safe” Rust? Rust has two personalities. One is Safe Rust – the “normal” Rust developers typically use. But there is a second, mostly hidden part of Rust – unsafe Rust. Unsafe Rust has all the capabilities (and dangers) of unsafe languages such as C and C++. With unsafe Rust, you can (attempt to) access any memory location, and do other things the Rust compiler will not normally allow. Ideally, Rust developers won’t use unsafe Rust, but nothing stops them from doing so. A canonical example is mutating a global variable – it must be done in an unsafe context:

static mut MY_GLOBAL: i32 = 0;

fn do_work() {
    //MY_GLOBAL += 1;  // does not compile outside an unsafe block

    unsafe {
        MY_GLOBAL += 1;
    }
}

Unsafe Rust is the power behind safe Rust. At the end of the day, hardware (like CPUs) is “unsafe” – it will do anything it’s told, attempt to access any memory location, etc. This is why safe Rust is layered on top: in many cases it wraps unsafe code, which is necessary to get things done. Safe Rust must trust unsafe Rust implicitly – the compiler has no way of verifying the correctness of unsafe code. As long as developers stick to safe Rust, they get a guarantee from the Rust compiler, and from well-debugged unsafe Rust libraries, that their code will not misbehave with respect to memory safety.
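
As a sketch of that layering (a simplified, hypothetical example – the real standard library is more elaborate): a safe function can verify an invariant and only then call into unsafe code, so its callers never need unsafe themselves:

```rust
// A safe wrapper around an unsafe operation: the bounds check upholds
// the invariant that `get_unchecked` requires, so callers stay in
// safe Rust.
fn element_at(slice: &[i32], index: usize) -> Option<i32> {
    if index < slice.len() {
        // SAFETY: `index` was just verified to be in bounds.
        Some(unsafe { *slice.get_unchecked(index) })
    } else {
        None
    }
}

fn main() {
    let data = [1, 2, 3];
    assert_eq!(element_at(&data, 1), Some(2));
    assert_eq!(element_at(&data, 9), None); // no out-of-bounds access
}
```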

The Second Rust Safety

You may be wondering why, in the above example, access to a mutable global variable requires an unsafe context. This brings in the second part of Rust safety, the one that really sets it apart from other memory-safe languages – safety in a concurrent environment.

Typical code is multithreaded these days, if only to utilize today’s multicore systems. Writing correct multithreaded code, however, is far from trivial. Latent bugs may lurk in code for years, manifesting at what appear to be random times, which makes them difficult to fix because they are difficult to reproduce – and bugs that cannot be reliably reproduced cannot be truly fixed.

For example, languages such as C# and Java will allow you to access shared data concurrently from multiple threads, where at least one of the threads is writing to the shared data. This is the definition of a data race – something to avoid at all costs. These languages provide facilities to deal with data races, but nothing enforces using them correctly (or at all). For example, C# has the lock keyword, which ensures a block of code is executed by one thread at a time, avoiding a data race if that block is the only code touching the shared data. But what happens if another block of code accesses the same shared data while taking a different lock, or no lock at all? You get a data race again, and because of timing it would occur at unpredictable moments, making it difficult to identify, let alone fix.

This is where Rust offers a different approach. Its famous ownership model, which provides memory safety, is also the one providing concurrency safety. If you try to mutate shared data from multiple threads without synchronization, the compiler will refuse to compile the code. This is exactly what happens in the global variable example: the compiler cannot prove that every piece of code accesses the global variable in a thread-safe way. To stay in safe code, the data to be protected can be wrapped in a Mutex:

use std::sync::Mutex;

static MY_GLOBAL2: Mutex<i32> = Mutex::new(0);

fn do_work() {
    let mut data = MY_GLOBAL2.lock().unwrap();
    *data += 1;
}

Notice the Mutex itself is not declared mutable. With the above code, there is no way to access the shared data without going through the mutex. This is really powerful, and it is not offered by other memory-safe languages. The code may look a bit weird, “dereferencing” the integer, but this is necessary here because the value returned from lock().unwrap() is not an integer reference but a guard type whose deref operator is overloaded. For a plain i32 a Mutex is not really needed – an AtomicI32 would be more efficient and easier to use – but for more interesting types the mutex is a good fit, such as the following thread-safe access to a Vec<i32>:

use std::sync::Mutex;

static MY_DATA: Mutex<Vec<i32>> = Mutex::new(Vec::new());

fn do_work() {
    let mut v = MY_DATA.lock().unwrap();
    v.push(42);
}
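
To show the atomic alternative mentioned above (a sketch; the 4 threads and 1000 iterations are arbitrary): an AtomicI32 supports safe concurrent increments without any lock:

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::thread;

// For a plain integer, an atomic is cheaper than a Mutex: threads
// increment the counter without ever blocking each other.
static COUNTER: AtomicI32 = AtomicI32::new(0);

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                for _ in 0..1000 {
                    COUNTER.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // All 4000 increments are accounted for - no data race possible.
    assert_eq!(COUNTER.load(Ordering::Relaxed), 4000);
}
```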

Is it all Rainbows and Unicorns?

Not quite. Rust is known for its steep learning curve. As any beginning Rust developer knows, it sometimes feels like a fight between the developer and the Rust compiler. With time, developers learn to appreciate the compiler, as it makes sure nasty stuff does not happen at runtime. But because of this strong guarantee, seemingly simple things turn out to be not so simple. For example, consider this simple function that returns the longer of two strings (technically string slices):

fn longest(a: &str, b: &str) -> &str {
    if a.len() > b.len() {
        a
    }
    else {
        b
    }
}

This function fails compilation with an error:

error[E0106]: missing lifetime specifier
  --> src/main.rs:26:33
   |
26 | fn longest(a: &str, b: &str) -> &str {
   |               ----     ----     ^ expected named lifetime parameter
   |
   = help: this function's return type contains a borrowed value, but the signature does not say whether it is borrowed from `a` or `b`
help: consider introducing a named lifetime parameter
   |
26 | fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
   |           ++++     ++          ++          ++

One of the strengths of the Rust compiler is its explanations and suggestions, which in many cases help resolve compiler errors, as with this one. Lifetime specifiers introduce complexity when working with references (to anything), because the compiler must be sure that a reference cannot outlive the object it references. If the compiler can’t prove that, it fails compilation. The problem with the above code is that the compiler can’t tell which reference a call to longest will return – every call could return either a reference to a or to b. But why is that a problem? Here is an example:

let s1 = String::from("Hello");
let s3;

{
    let s2 = String::from("Goodbye");
    s3 = longest(&s1, &s2);
}

println!("Longest: {}", s3);

Here, s3 should reference either s1 or s2, but s2 only lives in the inner scope, which means that s3 potentially could reference a dead string; the Rust compiler cannot take that chance. Lifetime annotations tell the compiler what to expect:

fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() > b.len() {
        a
    }
    else {
        b
    }
}

The syntax looks weird before you get used to it (I won’t get into the details here), but the annotations specify that a, b, and the returned reference all share a common lifetime – the returned reference is valid only as long as both inputs are. This makes the compiler happy, as it now knows what callers of longest are allowed to do with the result. In particular, it will fail compilation of the call to longest above with the following error:

error[E0597]: `s2` does not live long enough
  --> src/main.rs:18:27
   |
17 |         let s2 = String::from("Goodbye");
   |             -- binding `s2` declared here
18 |         s3 = longest(&s1, &s2);
   |                           ^^^ borrowed value does not live long enough
19 |     }
   |     - `s2` dropped here while still borrowed
20 |
21 |     println!("Longest: {}", s3);
   |                             -- borrow later used here

This means that the result of longest cannot be used beyond the lifetime of the shorter-lived of its two arguments – which is exactly the problem in the snippet above, where s2 is dropped before s3 is used.

Fortunately, lifetime annotations are not needed in the majority of cases (there are a few lifetime elision rules that help), and in many cases references are not returned or stored. Still, this is another new concept Rust developers need to learn, one that has no equivalent in most other languages.
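
For example (first_word is a made-up helper, but the elision it relies on is standard): with a single reference parameter, the compiler ties the output’s lifetime to the input’s automatically, so no annotation is needed:

```rust
// One input reference: the elision rules assign its lifetime to the
// output automatically - this compiles without any 'a annotation.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let sentence = String::from("hello brave world");
    assert_eq!(first_word(&sentence), "hello");
    assert_eq!(first_word(""), "");
}
```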

So, is Rust the Ultimate Language?

It’s not enough for a programming language to be elegant or performant in order to succeed. There are other factors, some even more important. Just look at Python – it has poor performance, but it is still extremely popular. Here are some factors I consider necessary for a language to thrive:

  • The language itself
  • Library availability
  • Ecosystem and community
  • Tooling

Libraries are critical – reinventing the wheel is not an option if software is to be delivered reliably and quickly enough. For Rust, the library ecosystem is large and growing rapidly. Just look at https://crates.io.

Tooling is another major one. Rust has its own package manager, cargo, which is extremely flexible and powerful, and makes consuming external packages (and much other project configuration) easy. The other part of “tooling” is the IDE. This is where I think Rust is still lacking. The most popular IDE for Rust development is Visual Studio Code, with an extension called rust-analyzer. The extension is powerful and constantly improving, but VS Code itself is very generic, and is not geared towards Rust in particular.

Personally, being mostly a Visual Studio user, I feel the difference. When working with C# or C++ in Visual Studio, the integration is tight and powerful, and the debugger is more flexible and capable than what VS Code can offer with its generality. Visual Studio “supports” Rust as well, via the same rust-analyzer extension used in VS Code, but it’s not the same experience – it feels like a “foreign” extension. Unfortunately, Microsoft doesn’t seem to have a plan for providing first-class Rust support in Visual Studio, even though they claim to use Rust a lot, even within the Windows OS.

One newer tool worth mentioning is JetBrains’ RustRover, an IDE dedicated to Rust. Personally, I find some of the JetBrains tools more complex to use than I would like, but that’s probably a matter of getting used to them.

Conclusion

Rust is going places, that’s for sure. It has lots of power and flexibility, and can be used for almost anything – from low-level work like UEFI applications, to web applications, other server applications, and the cloud. There are some parts of Rust’s syntax and philosophy I don’t completely agree with, but every language makes compromises.

There is no “perfect programming language”, of course, although I dreamed of creating one in my youth 🙂